PyNN Documentation
Release 0.8.2
the PyNN community
January 13, 2017
Contents

1 Introduction
2 Installation
3 Quickstart
4 Building networks
5 Injecting current
6 Recording spikes and state variables
7 Data handling
8 Simulation control
9 Model parameters and initial values
10 Random numbers
11 Backends
12 Running parallel simulations
13 Units
14 Examples
15 Contributors and licence
16 Release notes
17 Developers’ Guide
18 API reference
19 Old documents
20 Indices and tables

Python Module Index
This is the documentation for version 0.8.2. For versions 0.7 and earlier, see http://neuralensemble.org/trac/PyNN
CHAPTER 1
Introduction
PyNN (pronounced ‘pine’) is a simulator-independent language for building neuronal network models.
In other words, you can write the code for a model once, using the PyNN API and the Python programming language,
and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST and Brian) as
well as on certain neuromorphic hardware systems.
Note: in PyNN 0.8.2, only NEST, NEURON and Brian are supported. Support for other simulators (e.g. Brian 2,
MOOSE) and for neuromorphic hardware will be (re)introduced later in the 0.8 development cycle.
The PyNN API aims to support modelling at a high level of abstraction (populations of neurons, layers, columns and
the connections between them) while still allowing access to the details of individual neurons and synapses when required. PyNN provides a library of standard neuron, synapse and synaptic plasticity models, which have been verified
to work the same on the different supported simulators. PyNN also provides a set of commonly-used connectivity
algorithms (e.g. all-to-all, random, distance-dependent, small-world) but makes it easy to provide your own connectivity in a simulator-independent way, either using the Connection Set Algebra (Djurfeldt, 2012) or by writing your
own Python code.
Even if you don’t wish to run simulations on multiple simulators, you may benefit from writing your simulation code
using PyNN’s powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your
simulator, and are not restricted to the standard models. PyNN transparently supports distributed simulations (using
MPI) where the underlying simulator does.
It is straightforward to port an existing model from a Python-supporting simulator to PyNN, since this can be done
incrementally, replacing one piece of simulator-specific code at a time with the PyNN equivalent, and testing that the
model behaviour is unchanged at each step.
Download the current stable release of the library (0.8.2) or get the development version from the Git repository.
1.1 Licence
The code is released under the CeCILL licence. (This is equivalent to and compatible with the GPL).
1.2 Citing PyNN
If you publish work using or mentioning PyNN, we would appreciate it if you would cite the following paper:
Davison AP, Brüderle D, Eppler JM, Kremkow J, Muller E, Pecevski DA, Perrinet L and Yger P (2009) PyNN: a
common interface for neuronal network simulators. Front. Neuroinform. 2:11 doi:10.3389/neuro.11.011.2008.
1.3 Questions/Bugs/Enhancements
If you find a bug in PyNN, or wish to request a new feature, please go to the PyNN issue tracker, click on “New Issue”,
and fill in the form.
If you have questions or comments about PyNN, please post a message on the NeuralEnsemble Google group.
CHAPTER 2
Installation
The following instructions are for Linux and Mac OS X. It should be possible to install and run PyNN on Windows,
but this has not been tested.
Installing PyNN requires:
• Python (version 2.6, 2.7, 3.3, 3.4 or 3.5)
• a recent version of the NumPy package
• the lazyarray package
• the Neo package
• at least one of the supported simulators: e.g. NEURON, NEST, or Brian.
Optional dependencies are:
• mpi4py (if you wish to run distributed simulations using MPI)
• either Jinja2 or Cheetah (templating engines)
• the CSA library
2.1 Installing PyNN
Note: if using NEURON, it is easiest if you install NEURON before you install PyNN (see below).
The easiest way to get PyNN is to use pip:
$ pip install pyNN
If you are running Debian or Ubuntu, there are binary packages available. If you would prefer to install manually, download the latest source distribution, then run the setup script, e.g.:
$ tar xzf PyNN-0.8.2.tar.gz
$ cd PyNN-0.8.2
$ python setup.py install
This will install it to your Python site-packages directory, and may require root privileges. We strongly recommend, however, that you use a virtualenv or a Conda environment. We assume you have already installed the
simulator(s) you wish to use it with. If this is not the case, see below for installation instructions.
Test it using something like the following:
>>> import pyNN.nest as sim
>>> sim.setup()
>>> sim.end()
(This assumes you have NEST installed).
With NEURON as the simulator, make sure you install NEURON before you install PyNN. The PyNN installation
will then compile PyNN-specific membrane mechanisms, which are loaded when importing the neuron module:
>>> import pyNN.neuron as sim
NEURON -- Release 7.4 (1370:16a7055d4a86) 2015-11-09
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2015
See http://www.neuron.yale.edu/neuron/credits
loading membrane mechanisms from /home/docker/dev/PyNN/pyNN/neuron/nmodl/x86_64/.libs/libnrnmech.so
Additional mechanisms from files
adexp.mod alphaisyn.mod alphasyn.mod expisyn.mod gap.mod gsfa_grr.mod hh_traub.mod
izhikevich.mod netstim2.mod refrac.mod reset.mod stdwa_guetig.mod stdwa_softlimits.mod
stdwa_songabbott.mod stdwa_symm.mod stdwa_vogels2011.mod tmgsyn.mod tmisyn.mod
tsodyksmarkram.mod vecstim.mod
If you installed PyNN before installing NEURON, or if you update your PyNN installation, you will need to manually
run nrnivmodl in the pyNN/neuron/nmodl directory.
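For example (the exact path depends on where PyNN was installed; the one shown is illustrative):
$ cd /path/to/site-packages/pyNN/neuron/nmodl
$ nrnivmodl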
2.2 Installing NEURON
Download the sources for the latest release of NEURON, in .tar.gz format, from
http://www.neuron.yale.edu/neuron/download/getstd. Also download Interviews from the same location.
Compile Interviews and NEURON according to the instructions given at
http://www.neuron.yale.edu/neuron/static/download/compilestd_unix.html, except that when you run configure,
add the options --with-nrnpython and, optionally, --with-paranrn, i.e.:
$ ./configure --prefix=`pwd` --with-nrnpython --with-paranrn
$ make
$ make install
Make sure that you add the Interviews and NEURON bin directories to your path. Test that the Python support has
been enabled by running:
$ nrniv -python
NEURON -- Release 7.4 (1370:16a7055d4a86) 2015-11-09
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2015
See http://www.neuron.yale.edu/neuron/credits
>>> import hoc
>>> import nrn
Now you can compile and install NEURON as a Python package:
$ cd src/nrnpython
$ python setup.py install
Now test everything worked:
$ python
>>> import neuron
NEURON -- Release 7.4 (1370:16a7055d4a86) 2015-11-09
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2015
See http://www.neuron.yale.edu/neuron/credits
If you run into problems, check out the NEURON Forum.
2.3 Installing NEST and PyNEST
NEST 2.10 can be downloaded from http://www.nest-simulator.org/download/. Earlier versions of NEST will not
work with this version of PyNN. The full installation instructions are available in the file INSTALL, which you can
find in the NEST source package, or at http://www.nest-simulator.org/installation/.
Note: NumPy must be installed before installing NEST.
Note: Make sure you have the GNU Scientific Library (GSL) installed, otherwise some PyNN standard models (e.g.
IF_cond_exp) will not be available.
On Linux, most Unix variants and Mac OS X, installation is usually as simple as:
$ ./configure --with-mpi
$ make
$ make install
This will install PyNEST to your Python site-packages directory. If you wish to install it elsewhere, see the full
installation instructions.
Now try it out:
$ cd ~
$ python
>>> import nest
-- N E S T --
Copyright (C) 2004 The NEST Initiative
Version 2.10.0 Mar 21 2016 21:29:37
...
>>> nest.Models()
(u'ac_generator', u'aeif_cond_alpha', u'aeif_cond_alpha_RK5', u'aeif_cond_alpha_multisynapse',
...
Check that ’aeif_cond_alpha’ is in the list of models. If it is not, you may need to install a newer version of the
GNU Scientific Library and then recompile NEST.
2.4 Installing Brian
Instructions for downloading and installing Brian are available from http://briansimulator.org/download/. Note that
this version of PyNN works with Brian 1.4, but not with Brian 2.
CHAPTER 3
Quickstart
to write...
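In the meantime, here is a minimal sketch of a typical PyNN session, assuming the NEST backend is installed (any
supported backend can be substituted); it uses only functions and classes described in the following chapters:

import pyNN.nest as sim  # or pyNN.neuron, pyNN.brian, ...

sim.setup(timestep=0.1)
cells = sim.Population(10, sim.IF_cond_exp())  # ten integrate-and-fire neurons
cells.record(['spikes', 'v'])                  # record spikes and membrane potential
sim.run(100.0)                                 # advance the simulation by 100 ms
data = cells.get_data()                        # a Neo Block containing the recordings
sim.end()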
CHAPTER 4
Building networks
4.1 Building networks: neurons
4.1.1 Cell types
In PyNN, the system of equations that defines a neuronal model is encapsulated in a CellType class. PyNN provides
a library of “standard” cell types (see Standard models) which work the same across all backend simulators - an
example is the IF_cond_exp model - an integrate-and-fire (I&F) neuron with conductance-based, exponential-decay synapses. For any given simulator, it is also possible to wrap a native model - a NEST or NEURON model, for
example - in a CellType class so that it works in PyNN (see documentation for individual Backends).
It should be noted that these “cell types” are mathematical cell types. Two or more different biological cell types
may be represented by the same mathematical cell type, but with different parameterizations. For example, in the
thalamocortical model of Destexhe (2009), thalamocortical relay neurons and cortical neurons are both modelled with
the adaptive exponential I&F neuron model (AdExp):
>>> refractory_period = RandomDistribution('uniform', [2.0, 3.0], rng=NumpyRNG(seed=4242))
>>> ctx_parameters = {
...     'cm': 0.25, 'tau_m': 20.0, 'v_rest': -60, 'v_thresh': -50, 'tau_refrac': refractory_period,
...     'v_reset': -60, 'v_spike': -50.0, 'a': 1.0, 'b': 0.005, 'tau_w': 600, 'delta_T': 2.5,
...     'tau_syn_E': 5.0, 'e_rev_E': 0.0, 'tau_syn_I': 10.0, 'e_rev_I': -80 }
>>> tc_parameters = ctx_parameters.copy()
>>> tc_parameters.update({'a': 20.0, 'b': 0.0})
>>> thalamocortical_type = EIF_cond_exp_isfa_ista(**tc_parameters)
>>> cortical_type = EIF_cond_exp_isfa_ista(**ctx_parameters)
(see Model parameters and initial values for more on specifying parameter values). To see the list of parameter names
for a given cell type, use the get_parameter_names() method:
>>> IF_cond_exp.get_parameter_names()
['tau_refrac', 'cm', 'tau_syn_E', 'v_rest', 'tau_syn_I', 'tau_m', 'e_rev_E', 'i_offset', 'e_rev_I', 'v_thresh', 'v_reset']
while the default values for the parameters are in the default_parameters attribute:
>>> print(IF_cond_exp.default_parameters)
{'tau_refrac': 0.1, 'cm': 1.0, 'tau_syn_E': 5.0, 'v_rest': -65.0, 'tau_syn_I': 5.0, 'tau_m': 20.0, 'e_rev_E': 0.0, 'i_offset': 0.0, 'e_rev_I': -70.0, 'v_thresh': -50.0, 'v_reset': -65.0}
Note that what we have created here are neuron type objects. These can be regarded as templates, from which we will
construct the actual neurons in our network.
4.1.2 Populations
Since PyNN is designed for modelling networks containing many neurons, the default level of abstraction in PyNN is
not the single neuron but a population of neurons of a given type, represented by the Population class:
>>> tc_cells = Population(100, thalamocortical_type)
>>> ctx_cells = Population(500, cortical_type)
To create a Population, we need to specify at minimum the number of neurons and the cell type. Three additional
arguments may optionally be specified:
• the spatial structure of the population;
• initial values for the neuron state variables;
• a label.
>>> from pyNN.space import Grid2D, RandomStructure, Sphere
>>> tc_cells = Population(100, thalamocortical_type,
...                       structure=RandomStructure(boundary=Sphere(radius=200.0)),
...                       initial_values={'v': -70.0},
...                       label="Thalamocortical neurons")
>>> from pyNN.random import RandomDistribution
>>> v_init = RandomDistribution('uniform', (-70.0, -60.0))
>>> ctx_cells = Population(500, cortical_type,
...                        structure=Grid2D(dx=10.0, dy=10.0),
...                        initial_values={'v': v_init},
...                        label="Cortical neurons")
(see Representing spatial structure and calculating distances for more detail on spatial structure and Model parameters
and initial values for more on specifying initial values.)
For backwards compatibility and for ease of transitioning from other simulator languages, the create() function is
available as an alias for Population. The following two lines are equivalent:
>>> cells = create(my_cell_type, n=100)
>>> cells = Population(100, my_cell_type)
(Note the different argument order).
4.1.3 Views
It is common to work with only a subset of the neurons in a Population - to modify their parameters, make
connections or record from them. Any subset of neurons in a population may be addressed using the usual Python
indexing and slicing notation, for example:
>>> id = ctx_cells[47]           # the 48th neuron in a Population
>>> view = ctx_cells[:80]        # the first eighty neurons
>>> view = ctx_cells[::2]        # every second neuron
>>> view = ctx_cells[45, 91, 7]  # a specific set of neurons
It is also possible to address a random sample of neurons within a population using the sample() method:
>>> view = ctx_cells.sample(50, rng=NumpyRNG(seed=6538))  # select 50 neurons at random
In the first of these examples, the object that is returned is an ID object, representing a single neuron. ID objects are
discussed below.
In all of these examples except the first, the object that is returned is a PopulationView. A PopulationView
holds references to a subset of neurons in a Population, which means that any changes in the view are also reflected
in the real population (and vice versa).
PopulationView objects behave in most ways as real Population objects; notably, they can be used in a
Projection (see Building networks: connections) and combined with other Population or PopulationView
objects to create an Assembly.
The parent attribute of a PopulationView has a reference to the Population that is being viewed, and the
mask attribute contains the indices of the neurons that are in the view.
>>> view.parent.label
'Cortical neurons'
>>> view.mask
array([150, 181,  53, 149, 496, 499, 240, 444,  13, 100,  28,  19, 101,
       122, 143, 486, 467, 492, 406,  90, 136, 173,   8, 341,   5, 348,
       188,  63, 129, 416, 307, 298,  60, 180, 382,  47, 484, 370, 223,
       147,  72,  32, 261, 193, 249, 212,  58,  87,  86, 456])
4.1.4 Assemblies
As discussed above, a Population is a homogeneous collection of neurons, in the sense that all neurons have the
same cell type. An Assembly is an aggregate of Population and PopulationView objects, and as such can
represent a heterogeneous collection of neurons, of multiple cell types.
An Assembly can be created by adding together Population and PopulationView objects:
>>> all_cells = tc_cells + ctx_cells
>>> cells_for_plotting = tc_cells[:10] + ctx_cells[:50]
or by using the Assembly constructor:
>>> all_cells = Assembly(tc_cells, ctx_cells)
An assembly behaves in most ways like a Population, e.g. for setting and retrieving parameters, specifying which
neurons to record from, etc. It can also be specified as the source or target of a Projection. In this case, all the
neurons in the component populations are treated as identical for the purposes of the connection algorithm (note that
if the post-synaptic receptor type is specified (with the receptor_type argument), an Exception will be raised if
not all component neuron types possess that receptor type).
Individual populations within an Assembly may be accessed via their labels, e.g.:
>>> all_cells.get_population("Thalamocortical neurons")
Population(100, EIF_cond_exp_isfa_ista(<parameters>), structure=RandomStructure(origin=(0.0, 0.0, 0.0
Iterating over an assembly returns individual IDs, ordered by population. Similarly, the size attribute of an
Assembly gives the total number of neurons it contains. To iterate over or count populations, use the populations
attribute:
>>> for p in all_cells.populations:
...     print("%-23s %4d %s" % (p.label, p.size, p.celltype.__class__.__name__))
Thalamocortical neurons  100 EIF_cond_exp_isfa_ista
Cortical neurons         500 EIF_cond_exp_isfa_ista
4.1.5 Inspecting and modifying parameter values and initial conditions
Although both parameter values and initial conditions may be specified when creating a Population (and this is
generally the most efficient place to do it), it is also possible to modify them later.
The get() method of Population, PopulationView and Assembly returns the current value(s) of one or
more parameters:
>>> ctx_cells.get('tau_m')
20.0
>>> all_cells[0:10].get('v_reset')
-60.0
If the parameter is homogeneous across the group, a single number will be returned, otherwise get() will return a
NumPy array containing the parameter values for all neurons:
>>> ctx_cells.get('tau_refrac')
array([ 2.64655001,  2.15914942,  2.53500179, ...
It is also possible to ask for multiple parameters at once, in which case a list of values in the same order as the list of
parameter names will be returned.
>>> ctx_cells.get(['tau_m', 'cm'])
[20.0, 0.25]
When running a distributed simulation using MPI, get() will by default return values for only those neurons that
exist on the current MPI node. To get the values for all neurons, use get(parameter_name, gather=True).
To modify parameter values, use the set() method. To set the same value for all neurons, pass a single number as
the parameter value:
>>> ctx_cells.set(a=2.0, b=0.2)
To set different values for different neurons there are several options - see Model parameters and initial values for
more details.
To modify the initial values of model variables, use the initialize() method:
>>> ctx_cells.initialize(v=RandomDistribution('normal', (-65.0, 2.0)),
...                      w=0.0)
The default initial values may be inspected as follows:
>>> ctx_cells.celltype.default_initial_values
{'gsyn_exc': 0.0, 'gsyn_inh': 0.0, 'w': 0.0, 'v': -70.6}
4.1.6 Injecting current into neurons
Static or time-varying currents may be injected into neurons using either the inject_into() method of the
CurrentSource:
>>> pulse = DCSource(amplitude=0.5, start=20.0, stop=80.0)
>>> pulse.inject_into(tc_cells)
or the inject() method of the Population, PopulationView or Assembly:
>>> import numpy
>>> times = numpy.arange(0.0, 100.0, 1.0)
>>> amplitudes = 0.1*numpy.sin(times*numpy.pi/100.0)
>>> sine_wave = StepCurrentSource(times=times, amplitudes=amplitudes)
>>> ctx_cells[80:90].inject(sine_wave)
See Injecting current for more about injecting currents.
4.1.7 Recording variables and retrieving recorded data
Just as each cell type has a well-defined set of parameters (whose values are constant over time), so it has a well-defined set of state variables, such as the membrane potential, whose values change over the course of a simulation.
The recordable attribute of a CellType class contains a list of these variables, as well as the ‘spikes’ variable,
which is used to record the times of action potentials:
>>> ctx_cells.celltype.recordable
['spikes', 'v', 'w', 'gsyn_exc', 'gsyn_inh']
The record() method specifies which variables should be recorded:
>>> all_cells.record('spikes')
>>> ctx_cells.sample(10).record(('v', 'w')) #, sampling_interval=0.2)
Note that the sampling interval must be an integer multiple of the simulation time step (except for simulators which
allow use of variable time-step integration methods).
Todo
discuss specifying filename in record()
At the end of a simulation, the recorded data can be retrieved using the get_data() method:
>>> t = run(0.2)
>>> data_block = all_cells.get_data()
or written to file using write_data():
>>> from neo.io import NeoHdf5IO
>>> h5file = NeoHdf5IO("my_data.h5")
>>> ctx_cells.write_data(h5file)
>>> h5file.close()
get_data() returns a Neo Block object.
For more information on Neo see the documentation at
http://packages.python.org/neo. Here, it will suffice to note that a Block is the top-level container, and contains
one or more Segments. Each Segment is a container for data that share a common time basis, and can contain lists
of AnalogSignal, AnalogSignalArray and SpikeTrain objects. These data objects inherit from NumPy
array, and so can be treated in further processing (analysis, visualization, etc.) in exactly the same way as plain arrays,
but in addition they carry metadata about units, sampling interval, etc.
write_data() also makes use of Neo, and allows writing to any of the several output file formats supported by Neo.
Note that as a short-cut, you can just give a filename to write_data(); the output format will then be determined
based on the filename extension (‘.h5’ for HDF5, ‘.txt’ for ASCII, etc.) if possible, otherwise the default file format
(determined by the value of pyNN.recording.DEFAULT_FILE_FORMAT) will be used.
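For example (the filename here is purely illustrative):

>>> ctx_cells.write_data("ctx_data.mat")  # format inferred from the extension: a Matlab file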
For more details, see Data handling.
4.1.8 Working with individual neurons
Although it is usually more convenient and more efficient to work with populations of neurons, it is occasionally
convenient to work with individual neurons, represented as an ID object:
>>> tc_cells[47]
709
For the simulator backends shipped with PyNN, the ID class is a subclass of int, and in the case of NEURON and
NEST matches the global ID (gid) used internally by the simulator. There is no requirement that IDs be integers,
however, nor that they have the same value across different simulators.
The parent attribute contains a reference to the parent population:
>>> a_cell = tc_cells[47]
>>> a_cell.parent.label
'Thalamocortical neurons'
To recover the index of a neuron within its parent given the ID, use Population.id_to_index(), e.g.:
>>> tc_cells.id_to_index(a_cell)
47
The ID object allows direct access to the parameters of individual neurons, e.g.:
>>> a_cell.tau_m
20.0
To change several parameters at once for a single neuron, use the set_parameters() method:
>>> a_cell.set_parameters(tau_m=10.0, cm=0.5)
>>> a_cell.tau_m
10.0
>>> a_cell.cm
0.5
4.2 Building networks: connections
Conceptually, a synapse consists of a pre-synaptic structure, the synaptic cleft, and a post-synaptic structure. In
PyNN, the temporal dynamics of the post-synaptic response are handled by the post-synaptic neuron model (see Cell
types). The size of the post-synaptic response (the “synaptic weight”), the temporal dynamics of the weight (synaptic
plasticity) and the connection delay are handled by synapse models.
At the time of writing, most neuronal network models do not explicitly model the axon. Rather, the time for propagation of the action potential from soma/initial segment to axon terminal is added to the synaptic transmission time to
give a composite delay, referred to as “synaptic delay” in this documentation. For point neuron models, which do not
include an explicit model of the dendrite, the time for transmission of the post-synaptic potential to the soma may also
be considered as being included in the composite synaptic delay.
At a minimum, therefore, a synaptic connection in PyNN has two attributes: “weight” and “delay”, which are interpreted as described above. Where the weight has its own dynamics, a connection may have more attributes: the
plasticity model and its parameters.
Note: Currently, PyNN supports only chemical synapses, not electrical synapses. If the underlying simulator supports
electrical synapses, it is still possible to use them in a PyNN model, but this will not be simulator-independent.
Note: Currently, PyNN does not support stochastic synapses. If you would like to have support for this, or any other
feature, please make a feature request.
4.2.1 Synapse types
Analogously to neuron models, the system of equations that defines a synapse model is encapsulated in a
SynapseType class. PyNN provides a library of “standard” synapse types (see Standard models) which work
the same across all backend simulators.
Fixed synaptic weight
The simplest, and default synapse type in PyNN has constant synaptic weight:
syn = StaticSynapse(weight=0.04, delay=0.5)
Note: weights are in microsiemens or nanoamps, depending on whether the post-synaptic mechanism implements a
change in conductance or current, and delays are in milliseconds (see Units).
It is also possible to add variability to synaptic weights and delays by specifying a RandomDistribution object
as the parameter value:
w = RandomDistribution('gamma', [10, 0.004], rng=NumpyRNG(seed=4242))
syn = StaticSynapse(weight=w, delay=0.5)
It is also possible to specify parameters as a function of the distance (typically in microns, but different scales are
possible - see Representing spatial structure and calculating distances) between pre- and post-synaptic neurons:
syn = StaticSynapse(weight=w, delay="0.2 + 0.01*d")
Short-term synaptic plasticity
PyNN currently provides one standard model for short-term synaptic plasticity (facilitation and depression):
depressing_synapse = TsodyksMarkramSynapse(weight=w, delay=0.2, U=0.5,
tau_rec=800.0, tau_facil=0.0)
tau_rec = RandomDistribution('normal', [100.0, 10.0])
facilitating_synapse = TsodyksMarkramSynapse(weight=w, delay=0.5, U=0.04,
tau_rec=tau_rec)
Spike-timing-dependent plasticity
STDP models are specified in a slightly different way than other standard models: an STDP synapse type is constructed
from separate weight-dependence and timing-dependence components, e.g.:
stdp = STDPMechanism(
weight=0.02, # this is the initial value of the weight
delay="0.2 + 0.01*d",
timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0,
A_plus=0.01, A_minus=0.012),
weight_dependence=AdditiveWeightDependence(w_min=0, w_max=0.04))
Note that not all simulators will support all possible combinations of synaptic plasticity components.
4.2.2 Connection algorithms
In PyNN, each different algorithm that can be used to determine which pre-synaptic neurons are connected to which
post-synaptic neurons (also called a “connection method” or “wiring method”) is encapsulated in a separate class.
Note: for those interested in design patterns, this is an example of the Strategy Pattern.
Each such class inherits from a base class, Connector, and must implement a connect() method which takes a
Projection object (see below) as its single argument.
PyNN’s library of connection algorithms currently contains the following classes:
All-to-all connections
Each neuron in the pre-synaptic population is connected to every neuron in the post-synaptic population. (In this
section, the term “population” should be understood as referring to any of the following: a Population, a
PopulationView, or an Assembly object.)
The AllToAllConnector constructor has one optional argument, allow_self_connections, for use when
connecting a population to itself. By default it is True, but if a neuron should not connect to itself, set it to False,
e.g.:
connector = AllToAllConnector(allow_self_connections=False)  # no autapses
One-to-one connections
Use of the OneToOneConnector requires that the pre- and post-synaptic populations have the same size. The
neuron with index i in the pre-synaptic population is then connected to the neuron with index i in the post-synaptic
population.
connector = OneToOneConnector()
Trying to connect two populations with different sizes will raise an Exception.
Connecting neurons with a fixed probability
With the FixedProbabilityConnector method, each possible connection between all pre-synaptic neurons
and all post-synaptic neurons is created with probability p_connect:
connector = FixedProbabilityConnector(p_connect=0.2)
Connecting neurons with a position-dependent probability
The connection probability can also depend on the positions of the pre- and post-synaptic neurons.
With the DistanceDependentProbabilityConnector, the connection probability depends on the distance
between the two neurons.
The constructor requires a string d_expression, which should be a distance expression, as described above for
delays, but returning a probability (a value between 0 and 1):
DDPC = DistanceDependentProbabilityConnector
connector = DDPC("exp(-d)")
connector = DDPC("d<3")
The first example connects neurons with an exponentially-decaying probability. The second example connects each
neuron to all its neighbours within a range of 3 units (typically interpreted as µm, but this is up to the individual user).
Note that boolean values True and False are automatically converted to numerical values 1.0 and 0.0.
Calculation of distance may be controlled by specifying a Space object, passed to the Projection constructor (see
below).
For a more general dependence of connection probability on position, use the
IndexBasedProbabilityConnector, which expects a function of the indices, i and j, of the pre- and
post-synaptic neurons. The function should return the probability of creating that connection.
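As a sketch (assuming, per the description above, that a plain Python function of the two indices is accepted; the
probability rule itself is purely illustrative):

import numpy

# connection probability decays with the difference between the pre- and post-synaptic indices
index_rule = lambda i, j: 0.5 * numpy.exp(-numpy.abs(i - j) / 10.0)
connector = IndexBasedProbabilityConnector(index_rule)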
Divergent/fan-out connections
The FixedNumberPostConnector connects each pre-synaptic neuron to exactly n post-synaptic neurons chosen
at random:
connector = FixedNumberPostConnector(n=30)
If n is less than the size of the post-synaptic population, there are no multiple connections, i.e., no instances of the
same pair of neurons being multiply connected. If n is greater than the size of the post-synaptic population, all possible
single connections are made before starting to add duplicate connections.
The number of post-synaptic neurons n can be fixed, or can be chosen at random from a RandomDistribution
object, e.g.:
distr_npost = RandomDistribution(distribution='binomial', n=100, p=0.3)
connector = FixedNumberPostConnector(n=distr_npost)
Convergent/fan-in connections
The FixedNumberPreConnector has the same arguments as FixedNumberPostConnector, but of course
it connects each post-synaptic neuron to n pre-synaptic neurons, e.g.:
connector = FixedNumberPreConnector(5)
distr_npre = RandomDistribution(distribution='poisson', lambda_=5)
connector = FixedNumberPreConnector(distr_npre)
Creating a small-world network
Todo
Pierre to write this bit?
Using the Connection Set Algebra
The Connection Set Algebra (Djurfeldt, 2012) is a sophisticated system that allows elaborate connectivity patterns to be
constructed using a concise syntax. Using the CSA requires the Python csa module to be installed (see Installation).
The details of constructing a connection set are beyond the scope of this manual. We give here a simple example.
import csa
cset = csa.full - csa.oneToOne
connector = CSAConnector(cset)
csa.full represents all-to-all connections, while csa.oneToOne represents the connection of pre-synaptic neuron i to post-synaptic neuron i. By subtracting the second from the first, the connection rule is “all-to-all, except where
the neurons have the same index”. If the pre- and post-synaptic populations are the same population, this is equivalent
to AllToAllConnector(allow_self_connections=False).
Todo
explain that weights and delays can either be specified within the connection set or within the synapse type.
Specifying a list of connections
Specific connection patterns not covered by the methods above can be obtained by specifying an explicit list of pre-synaptic and post-synaptic neuron indices. Optionally, the list can contain synaptic properties such as weights, delays,
or the parameters for plasticity rules. Example:
connections = [
(0, 0, 0.0, 0.1),
(0, 1, 0.0, 0.1),
(0, 2, 0.0, 0.1),
(1, 5, 0.0, 0.1)
]
connector = FromListConnector(connections, column_names=["weight", "delay"])
Any synaptic parameters not given in the list are determined from the synapse type. Parameters given in the list always
override the values from the synapse type.
Reading connection patterns to/from a file
Connection patterns can be read in from a text file. The file should contain a header specifying which parameter is in
which column, e.g.:
# columns = ["i", "j", "weight", "delay", "U", "tau_rec"]
and then the connection data should be in columns separated by spaces. The connections are read using:
connector = FromFileConnector("connections.txt")
Specifying an explicit connection matrix
The connectivity can be specified as a boolean array, where each row represents the existence of connections from a
given pre-synaptic neuron to the post-synaptic neurons. For example:
connections = numpy.array([[0, 1, 1, 0],
[1, 1, 0, 1],
[0, 0, 1, 0]],
dtype=bool)
connector = ArrayConnector(connections)
User-defined connection algorithms
If you wish to use a specific connection/wiring algorithm not covered by the PyNN built-in ones, the options include:
• constructing a list or array of connections and using the FromListConnector or ArrayConnector class;
• using the Connection Set Algebra and the CSAConnector class;
• writing your own Connector class - see the Developers’ guide for guidance on this.
4.2.3 Projections
A Projection is a container for a set of connections between two populations of neurons, where by population we
mean one of:
• a Population object - a group of neurons all of the same type;
• a PopulationView object - part of a Population;
• an Assembly - a heterogeneous group of neurons, which may be of different types.
Creating a Projection in PyNN also creates the connections at the level of the simulator. To create a Projection
we must specify:
• the pre-synaptic population;
• the post-synaptic population;
• a connection/wiring method;
• a synapse type.
Optionally, we can also specify:
• the name of the post-synaptic mechanism (e.g. ‘excitatory’, ‘NMDA’) (by default, this is ‘excitatory’);
• a label (autogenerated if not specified);
• a Space object, which determines how distances should be calculated for distance-dependent wiring schemes
or parameter values.
Here is a minimal example:
excitatory_connections = Projection(pre, post, AllToAllConnector(),
StaticSynapse(weight=0.123))
and here is a full example:
rng = NumpyRNG(seed=64754)
sparse_connectivity = FixedProbabilityConnector(0.1, rng=rng)
weight_distr = RandomDistribution('normal', [0.01, 1e-3], rng=rng)
facilitating = TsodyksMarkramSynapse(U=0.04, tau_rec=100.0, tau_facil=1000.0,
weight=weight_distr, delay=lambda d: 0.1+d/100.0)
space = Space(axes='xy')
inhibitory_connections = Projection(pre, post,
connector=sparse_connectivity,
synapse_type=facilitating,
receptor_type='inhibitory',
space=space,
label="inhibitory connections")
Note that the attribute receptor_types of all cell type classes contains a list of the possible values of
receptor_type for that cell type:
>>> post
Population(10, IF_cond_exp(<parameters>), structure=Line(y=0.0, x0=0.0, z=0.0, dx=1.0), label='popula
>>> post.celltype
IF_cond_exp(<parameters>)
>>> post.celltype.receptor_types
('excitatory', 'inhibitory')
The space argument is used to specify how to calculate distances, since we have used a distance expression to specify
the connection delay, modelling a constant axonal propagation speed.
By default, the 3D distance between cell positions is used, but the axes argument may be used to change this, i.e.:
space = Space(axes='xy')
will ignore the z-coordinate when calculating distance. Similarly, the origins of the coordinate systems of the
two populations and the relative scale of the two coordinate systems may be controlled using the offset and
scale_factor arguments to the Space constructor. This is useful when connecting brain regions that have very
different sizes but that have a topographic mapping between them, e.g. retina to LGN to V1.
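For example, a sketch with purely illustrative values for these arguments:

space = Space(axes='xy', scale_factor=10.0, offset=200.0)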
In more abstract models, it is often useful to be able to avoid edge effects by specifying periodic boundary conditions,
e.g.:
space = Space(periodic_boundaries=((0,500), (0,500), None))
calculates distance on the surface of a torus of circumference 500 µm (wrap-around in the x- and y-dimensions but not
z). For more information, see Representing spatial structure and calculating distances.
Accessing weights and delays
The Projection.get() method allows the retrieval of connection attributes, such as weights and delays. Two
formats are available. ’list’ returns a list of length equal to the number of connections in the projection, ’array’
returns a 2D weight array (with NaN for non-existent connections):
>>> excitatory_connections.get('weight', format='list')[3:7]
[(3, 0, 0.123), (4, 0, 0.123), (5, 0, 0.123), (6, 0, 0.123)]
>>> inhibitory_connections.get('delay', format='array')[:3,:5]
array([[  nan,   nan,   nan,   nan,  0.14],
       [  nan,   nan,   nan,  0.12,  0.13],
       [ 0.12,   nan,   nan,   nan,   nan]])
To suppress the coordinates of the connection in the ’list’ view, set the with_address option to False:
>>> excitatory_connections.get('weight', format='list', with_address=False)[3:7]
[0.123, 0.123, 0.123, 0.123]
As well as weight and delay, Projection.get() can also retrieve any other parameters of synapse models:
>>> inhibitory_connections.get('U', format='list')[0:4]
[(2, 0, 0.04), (6, 1, 0.04), (8, 1, 0.04), (9, 2, 0.04)]
It is also possible to retrieve the values of multiple attributes at once, as either a list of tuples or a tuple of arrays:
>>> connection_data = inhibitory_connections.get(['weight', 'delay'], format='list')
>>> for connection in connection_data[:5]:
...     src, tgt, w, d = connection
...     print("weight = %.4f delay = %4.2f" % (w, d))
weight = 0.0094 delay = 0.12
weight = 0.0113 delay = 0.15
weight = 0.0102 delay = 0.17
weight = 0.0097 delay = 0.17
weight = 0.0127 delay = 0.12
>>> weights, delays = inhibitory_connections.get(['weight', 'delay'], format='array')
>>> exists = ~numpy.isnan(weights)
>>> for w, d in list(zip(weights[exists].flat, delays[exists].flat))[:5]:
...     print("weight = %.4f delay = %4.2f" % (w, d))
weight = 0.0097 delay = 0.14
weight = 0.0127 delay = 0.12
weight = 0.0097 delay = 0.13
weight = 0.0094 delay = 0.18
weight = 0.0094 delay = 0.12
Note that in this last example we have filtered out the non-existent connections using numpy.isnan().
The Projection.save() method saves connection attributes to disk.
Todo
finish documenting save() method (also decide if it should be write() or save()) need to think about formats. Text,
HDF5, ...
Access to the weights and delays of individual connections is by the connections attribute, e.g.:
>>> list(inhibitory_connections.connections)[0].weight
0.0094460775218037779
>>> list(inhibitory_connections.connections)[10].weight
0.0086313719119562281
Modifying weights and delays
As noted above, weights, delays and other connection attributes can be specified on creation of a Projection, and
this is generally the most efficient time to specify them. It is also possible, however, to modify these attributes after
creation, using the set() method.
set() accepts any number of keyword arguments, where the key is the attribute name, and the value is either:
• a numeric value (all connections will be set to the same value);
• a RandomDistribution object (each connection will be set to a different value, drawn from the distribution);
• a list or NumPy array of the same length as the number of connections in the Projection;
• a generator;
• a string expressing a function of the distance between pre- and post-synaptic neurons.
Todo
clarify whether this is the number of local connections or the total number of connections.
Some examples:
>>> excitatory_connections.set(weight=0.02)
>>> excitatory_connections.set(weight=RandomDistribution('gamma', [1, 0.1]),
...                            delay=0.3)
>>> inhibitory_connections.set(U=numpy.linspace(0.4, 0.6, len(inhibitory_connections)),
...                            tau_rec=500.0,
...                            tau_facil=0.1)
It is also possible to access the attributes of individual connections using the connections attribute of a
Projection:
>>> for c in list(inhibitory_connections.connections)[:5]:
...     c.weight *= 2
although this is almost always less efficient than using list- or array-based access.
4.3 Representing spatial structure and calculating distances
The space module contains classes for specifying the locations of neurons in space and for calculating the distances
between them.
Neuron positions can be defined either manually, using the positions attribute of a Population or using a
Structure instance which is passed to the Population constructor.
A number of different structures are available in space. It is simple to define your own Structure sub-class if you
need something that is not already provided.
The simplest structure is a grid, whether 1D, 2D or 3D, e.g.:
>>> from pyNN.space import *
>>> line = Line(dx=100.0, x0=0.0, y=200.0, z=500.0)
>>> line.generate_positions(7)
array([[   0.,  100.,  200.,  300.,  400.,  500.,  600.],
       [ 200.,  200.,  200.,  200.,  200.,  200.,  200.],
       [ 500.,  500.,  500.,  500.,  500.,  500.,  500.]])
>>> grid = Grid2D(aspect_ratio=3, dx=10.0, dy=25.0, z=-3.0)
>>> grid.generate_positions(3)
array([[ 0., 10., 20.],
       [ 0.,  0.,  0.],
       [-3., -3., -3.]])
>>> grid.generate_positions(12)
array([[  0.,   0.,  10.,  10.,  20.,  20.,  30.,  30.,  40.,  40.,  50.,  50.],
       [  0.,  25.,   0.,  25.,   0.,  25.,   0.,  25.,   0.,  25.,   0.,  25.],
       [ -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.]])
Here we have specified an x:y ratio of 3, so if we ask the grid to generate positions for 3 neurons, we get a 3x1 grid,
12 neurons a 6x2 grid, 27 neurons 9x3, etc.
By default, grid positions are filled sequentially, iterating first over the z dimension, then y, then x, but we can also fill
the grid randomly:
>>> rgrid = Grid2D(aspect_ratio=1, dx=10.0, dy=10.0, fill_order='random', rng=NumpyRNG(seed=13886))
>>> rgrid.generate_positions(9)
array([[ 10.,  10.,  10.,   0.,   0.,   0.,  20.,  20.,  20.],
       [  0.,  10.,  20.,  10.,  20.,   0.,   0.,  10.,  20.],
       [  0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.]])
The space module also provides the RandomStructure class, which distributes neurons randomly and uniformly
within a given volume:
>>> glomerulus = RandomStructure(boundary=Sphere(radius=200.0), rng=NumpyRNG(seed=34534))
>>> glomerulus.generate_positions(5)
array([[ -19.78455022,   33.21412264,  -79.4314059 ,  143.39033263,  -63.18242977],
       [  56.17281502,  -23.15159309,  131.89071845,  -73.73583484,   -8.86422999],
       [ -78.88348228,   -3.97408513,  -95.03056844,   45.13969087, -111.67070498]])
The volume classes currently available are Sphere and Cuboid.
Defining your own Structure classes is straightforward, just inherit from BaseStructure and implement a
generate_positions() method:
class MyStructure(BaseStructure):
    parameter_names = ("spam", "eggs")

    def __init__(self, spam=3, eggs=1):
        ...

    def generate_positions(self, n):
        ...
        # must return a 3xn numpy array
To define your own Shape class for use with RandomStructure, subclass Shape and implement a sample()
method:
class Tetrahedron(Shape):

    def __init__(self, side_length):
        ...

    def sample(self, n, rng):
        ...
        # return a nx3 numpy array
Note: rotation of structures is currently missing, but is planned for a future release.
CHAPTER 5
Injecting current
Current waveforms are represented in PyNN by CurrentSource classes. There are four built-in source types, and
it is straightforward to implement your own.
There are two ways to inject a current waveform into the cells of a Population, PopulationView or
Assembly: either the inject_into() method of the CurrentSource or the inject() method of the
Population, Assembly, etc.
>>> pulse = DCSource(amplitude=0.5, start=20.0, stop=80.0)
>>> pulse.inject_into(population[3:7])
>>> sine = ACSource(start=50.0, stop=450.0, amplitude=1.0, offset=1.0,
...                 frequency=10.0, phase=180.0)
>>> population.inject(sine)
>>> steps = StepCurrentSource(times=[50.0, 110.0, 150.0, 210.0],
...                           amplitudes=[0.4, 0.6, -0.2, 0.2])
>>> steps.inject_into(population[(6,11,27)])
>>> noise = NoisyCurrentSource(mean=1.5, stdev=1.0, start=50.0, stop=450.0, dt=1.0)
>>> population.inject(noise)
For a full description of all the built-in current source classes, see the API reference.
Todo
write “implementing-your-own-current-source” (e.g., implement “chirp”)
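Until such a class exists, one workaround is to approximate an arbitrary waveform, such as a chirp, with a
finely-sampled StepCurrentSource. A sketch, with purely illustrative amplitudes and frequencies:

import numpy

dt = 1.0                                              # waveform sampling interval, ms
times = numpy.arange(0.0, 500.0, dt)
f = numpy.linspace(5.0, 50.0, times.size)             # frequency swept from 5 Hz to 50 Hz
phase = 2 * numpy.pi * numpy.cumsum(f) * dt / 1000.0  # integrate frequency to obtain the phase
chirp = StepCurrentSource(times=times, amplitudes=0.5 * numpy.sin(phase))
chirp.inject_into(population)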
CHAPTER 6
Recording spikes and state variables
It is possible to record the times of action potentials, and the values of state variables, of any neuron in the network.
Recording state variables of dynamic synapse models is not yet supported.
The classes Population, PopulationView and Assembly all have a record() method, which takes either
a single variable name or a list/tuple of such names, and which sets up recording of the requested variables for all
neurons in the population:
>>> population.record(['v', 'spikes'])  # record membrane potential and spikes from all neurons in the population
>>> assembly.record('spikes')           # record spikes from all neurons in multiple populations
To record from only a subset of the neurons in a Population, we create a temporary PopulationView using
indexing or the sample() method and call record() on this view:
>>> population.sample(10).record('v')                          # record membrane potential from 10 neurons
>>> population[[0,1,2]].record(['v', 'gsyn_exc', 'gsyn_inh'])  # record several variables from specific neurons
To find out what variable names are available for a given neuron model, inspect the recordable attribute of the
population’s celltype attribute:
>>> population.celltype
EIF_cond_alpha_isfa_ista(<parameters>)
>>> population.celltype.recordable
['spikes', 'v', 'w', 'gsyn_exc', 'gsyn_inh']
By default, variables are recorded at every time step. It is possible to record at a lower frequency using the
sampling_interval argument, e.g.:
>>> population.record(None)  # reset recording for this population
>>> population.record('v', sampling_interval=1.0)
You should ensure that the sampling interval is an integer multiple of the simulation time step. Other values may work,
but have not been tested.
Todo
document the low-level record() function
CHAPTER 7
Data handling
Todo
add a note that data handling has changed considerably since 0.7, and give link to detailed changelog.
Recorded data in PyNN is always associated with the Population or Assembly from which it was recorded.
Data may either be written to file, using the write_data() method, or retrieved as objects in memory, using
get_data().
7.1 Retrieving recorded data
Handling of recorded data in PyNN makes use of the Neo package, which provides a common Python data model for
neurophysiology data (whether real or simulated).
The get_data() method returns a Neo Block object. This is the top-level data container, which contains one or
more Segments. Each Segment is a container for data sharing a common time basis - a new Segment is added
every time the reset() function is called.
A Segment can contain lists of AnalogSignal, AnalogSignalArray and SpikeTrain objects. These data
objects inherit from NumPy’s array class, and so can be treated in further processing (analysis, visualization, etc.) in
exactly the same way as NumPy arrays, but in addition they carry metadata about units, sampling interval, etc.
Here is a complete example of recording and plotting data from a simulation:
import pyNN.neuron as sim # can of course replace `neuron` with `nest`, `brian`, etc.
import matplotlib.pyplot as plt
import numpy as np
sim.setup(timestep=0.01)
p_in = sim.Population(10, sim.SpikeSourcePoisson(rate=10.0), label="input")
p_out = sim.Population(10, sim.EIF_cond_exp_isfa_ista(), label="AdExp neurons")
syn = sim.StaticSynapse(weight=0.05)
random = sim.FixedProbabilityConnector(p_connect=0.5)
connections = sim.Projection(p_in, p_out, random, syn, receptor_type='excitatory')
p_in.record('spikes')
p_out.record('spikes')                    # record spikes from all neurons
p_out[0:2].record(['v', 'w', 'gsyn_exc']) # record other variables from first two neurons
sim.run(500.0)
spikes_in = p_in.get_data()
data_out = p_out.get_data()
fig_settings = {
'lines.linewidth': 0.5,
'axes.linewidth': 0.5,
'axes.labelsize': 'small',
'legend.fontsize': 'small',
'font.size': 8
}
plt.rcParams.update(fig_settings)
plt.figure(1, figsize=(6, 8))
def plot_spiketrains(segment):
    for spiketrain in segment.spiketrains:
        y = np.ones_like(spiketrain) * spiketrain.annotations['source_id']
        plt.plot(spiketrain, y, '.')
    plt.ylabel(segment.name)
    plt.setp(plt.gca().get_xticklabels(), visible=False)

def plot_signal(signal, index, colour='b'):
    label = "Neuron %d" % signal.annotations['source_ids'][index]
    plt.plot(signal.times, signal[:, index], colour, label=label)
    plt.ylabel("%s (%s)" % (signal.name, signal.units._dimensionality.string))
    plt.setp(plt.gca().get_xticklabels(), visible=False)
    plt.legend()
n_panels = sum(a.shape[1] for a in data_out.segments[0].analogsignalarrays) + 2
plt.subplot(n_panels, 1, 1)
plot_spiketrains(spikes_in.segments[0])
plt.subplot(n_panels, 1, 2)
plot_spiketrains(data_out.segments[0])
panel = 3
for array in data_out.segments[0].analogsignalarrays:
    for i in range(array.shape[1]):
        plt.subplot(n_panels, 1, panel)
        plot_signal(array, i, colour='bg'[panel % 2])
        panel += 1
plt.xlabel("time (%s)" % array.times.units._dimensionality.string)
plt.setp(plt.gca().get_xticklabels(), visible=True)
plt.show()
The adoption of Neo as an output representation also makes it easier to handle data when running multiple simulations
with the same network, calling reset() between each run. In previous versions of PyNN it was necessary to retrieve
the data before every reset(), and take care of storing the resulting data. Now, each run just creates a new Neo
Segment, and PyNN takes care of storing the data until it is needed. This is illustrated in the example below.
import pyNN.neuron as sim # can of course replace `neuron` with `nest`, `brian`, etc.
import matplotlib.pyplot as plt
from quantities import nA
sim.setup()
cell = sim.Population(1, sim.HH_cond_exp())
step_current = sim.DCSource(start=20.0, stop=80.0)
step_current.inject_into(cell)
cell.record('v')
for amp in (-0.2, -0.1, 0.0, 0.1, 0.2):
    step_current.amplitude = amp
    sim.run(100.0)
    sim.reset(annotations={"amplitude": amp * nA})
data = cell.get_data()
sim.end()
for segment in data.segments:
    vm = segment.analogsignalarrays[0]
    plt.plot(vm.times, vm,
             label=str(segment.annotations["amplitude"]))
plt.legend(loc="upper left")
plt.xlabel("Time (%s)" % vm.times.units._dimensionality)
plt.ylabel("Membrane potential (%s)" % vm.units._dimensionality)
plt.show()
Note: if you still want to retrieve the data after every run you can do so: just call get_data(clear=True)
7.2 Writing data to file
Neo provides support for writing to a variety of different file formats, notably an assortment of text-based formats,
NumPy binary format, Matlab .mat files, and HDF5. To write to a given format, we create a Neo IO object and pass
it to the write_data() method:
>>> from neo.io import NeoHdf5IO
>>> io = NeoHdf5IO(filename="my_data.h5")
>>> population.write_data(io)
As a shortcut, for file formats with a well-defined file extension, it is possible to pass just the filename, and PyNN will
create the appropriate IO object for you:
>>> population.write_data("my_data.mat")
# writes to a Matlab file
By default, all the variables that were specified in the record() call will be saved to file, but it is also possible to
save only a subset of the recorded data:
>>> population.write_data(io, variables=('v', 'gsyn_exc'))
When running distributed simulations using MPI (see Running parallel simulations), by default the data is gathered
from all MPI nodes to the master node, and only saved to file on the master node. If you would prefer that each node
saves its own local subset of the data to disk separately, use gather=False:
>>> population.write_data(io, gather=False)
Saving data to a file does not delete the data from the Population object. If you wish to do so (for example to
release memory), use clear=True:
>>> population.write_data(io, clear=True)
7.3 Simple plotting
Plotting Neo data with Matplotlib, as shown above, can be rather verbose, with a lot of repetitive boilerplate code.
PyNN therefore provides a couple of classes, Figure and Panel, to make quick-and-easy plots of recorded data.
It is possible to customize the plots to some extent, but for publication-quality or highly-customized plots you should
probably use Matplotlib or some other plotting package directly.
A simple example:
from pyNN.utility.plotting import Figure, Panel
...
population.record('spikes')
population[0:2].record(('v', 'gsyn_exc'))
...
data = population.get_data().segments[0]
vm = data.filter(name="v")[0]
gsyn = data.filter(name="gsyn_exc")[0]
Figure(
Panel(vm, ylabel="Membrane potential (mV)"),
Panel(gsyn, ylabel="Synaptic conductance (uS)"),
Panel(data.spiketrains, xlabel="Time (ms)", xticks=True)
).save("simulation_results.png")
7.4 Other packages for working with Neo data
A variety of software tools are available for working with Neo-format data, for example SpykeViewer and OpenElectrophy.
CHAPTER 8
Simulation control
8.1 Initialising the simulator
Before using any other functions or classes from PyNN, the user must call the setup() function:
>>> setup()
setup() takes various optional arguments: setting the simulation timestep (there is currently no support in the API
for variable timestep methods although native simulator code can be used to select this option where the simulator
supports it) and setting the minimum and maximum synaptic delays, e.g.:
>>> setup(timestep=0.1, min_delay=0.1, max_delay=10.0)
Calling setup() a second time resets the simulator entirely, destroying any network that may have been created in
the meantime.
Todo
add links to documentation on simulator-specific options to setup()
8.2 Getting information about the simulation state
Several functions are available for obtaining information about the simulation state:
• get_current_time() - the time within the simulation
• get_time_step() - the integration time step
• get_min_delay() - the minimum allowed synaptic delay
• get_max_delay() - the maximum allowed synaptic delay
• num_processes() - the number of MPI processes
• rank() - the MPI rank of the current node
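For example, in a single-process simulation set up with setup(timestep=0.1, min_delay=0.1, max_delay=10.0) as
above, one would expect:

>>> get_time_step()
0.1
>>> get_min_delay()
0.1
>>> num_processes()
1
>>> rank()
0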
8.3 Running a simulation
The run() function advances the simulation for a given number of milliseconds, e.g.:
>>> run(1000.0)
You can also use run_for(), which is an alias for run(). The run_until() function advances the simulation
until a given future time point, e.g.:
>>> run_until(1001.0)
>>> get_current_time()
1001.0
8.3.1 Performing operations during a run
You may wish to perform some calculation, or show some information, during a run. One way to do this is to break
the simulation into steps, and perform the operation at the end of each step, e.g.:
>>> for i in range(4):
...     run_until(100.0*i)
...     print("The time is %g" % (100*i,))
The time is 0
The time is 100
The time is 200
The time is 300
Alternatively, PyNN can take care of breaking the simulation into steps for you. run() and run_until() each
accept an optional list of callback functions. Each callback should accept the current time as an argument, and return
the next time it wishes to be called.
>>> def report_time(t):
...     print("The time is %g" % t)
...     return t + 100.0
>>> run_until(300.0, callbacks=[report_time])
The time is 0
The time is 100
The time is 200
The time is 300
300.0
For simple cases, this requires a bit more code, but it is potentially much more powerful, especially if you have
complex or multiple callbacks.
8.4 Repeating a simulation
If you wish to reset network time to zero to run a new simulation with the same network (with different parameter
values, perhaps), use the reset() function. Note that this does not change the network structure, nor the choice of
which neurons to record (from previous record() calls).
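For example, a simple parameter sweep might look like this (a sketch, assuming a Population p that has already been created and set up for recording; each repetition is stored as a separate Segment in the recorded data):

>>> for tau_m in (10.0, 15.0, 20.0):
...     p.set(tau_m=tau_m)
...     run(500.0)
...     reset()
>>> data = p.get_data()  # Neo Block with one Segment per repetition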
8.5 Finishing up
Just as a simulation must be begun with a call to setup(), it should be ended with a call to end(). This is not
always necessary, but it is safest to always use it.
CHAPTER 9
Model parameters and initial values
As was discussed in Building networks, PyNN deals with neurons, and with the synaptic connections between them,
principally at the level of groups: with Population and Assembly for neurons and Projection for connections.
Setting the parameters of neurons and connections is also done principally at the group level, either when creating
the group, or after creation using the set() method. Sometimes, all the neurons in a Population or all the
connections in a Projection should have the same value. Other times, different individual cells or connections
should have different parameter values. To handle both of these situations, parameter values may be of four different
types:
• a single number - sets the same value for all cells in the Population or connections in the Projection
• a RandomDistribution object (see Random numbers) - each item in the group will have the parameter set
to a value drawn from the distribution
• a list or 1D NumPy array - of the same size as the Population or the number of connections in the
Projection
• a function - for a Population or Assembly the function should take a single integer argument, and will be
called with the index of every neuron in the Population to return the parameter value for that neuron. For
a Projection, the function should take two integer arguments, and for every connection will be called with
the indices of the pre- and post-synaptic neurons.
9.1 Examples
9.1.1 Setting the same value for all neurons in a population
>>> p = Population(5, IF_cond_exp(tau_m=15.0))
or, equivalently:
>>> p = Population(5, IF_cond_exp())
>>> p.set(tau_m=15.0)
To set values for a subset of the population, use a view:
>>> p[0,2,4].set(tau_m=10.0)
>>> p.get('tau_m')
array([ 10.,  15.,  10.,  15.,  10.])
9.1.2 Setting parameters to random values
>>> from pyNN.random import RandomDistribution, NumpyRNG
>>> gbar_na_distr = RandomDistribution('normal', (20.0, 2.0), rng=NumpyRNG(seed=85524))
>>> p = Population(7, HH_cond_exp(gbar_Na=gbar_na_distr))
>>> p.get('gbar_Na')
array([ 20.03132455, 20.09777627, 16.97079318, 17.44786923,
19.4928947 , 20.80321881, 19.97246906])
>>> p[0].gbar_Na
20.031324546935146
9.1.3 Setting parameters from an array
>>> import numpy as np
>>> p = Population(6, SpikeSourcePoisson(rate=np.linspace(10.0, 20.0, num=6)))
>>> p.get('rate')
array([ 10., 12., 14., 16., 18., 20.])
The array of course has to have the same size as the population:
>>> p = Population(6, SpikeSourcePoisson(rate=np.linspace(10.0, 20.0, num=7)))
ValueError
9.1.4 Using a function to calculate parameter values
>>> from numpy import sin, pi
>>> p = Population(8, IF_cond_exp(i_offset=lambda i: sin(i*pi/8)))
>>> p.get('i_offset')
array([ 0.        ,  0.38268343,  0.70710678,  0.92387953,  1.        ,
        0.92387953,  0.70710678,  0.38268343])
9.1.5 Setting parameters as a function of spatial position
>>> from pyNN.space import Grid2D
>>> grid = Grid2D(dx=10.0, dy=10.0)
>>> p = Population(16, IF_cond_alpha(), structure=grid)
>>> def f_v_thresh(pos):
...     x, y, z = pos.T
...     return -50 + 0.5*x - 0.2*y
>>> p.set(v_thresh=lambda i: f_v_thresh(p.position_generator(i)))
>>> p.get('v_thresh').reshape((4,4))
array([[-50., -52., -54., -56.],
[-45., -47., -49., -51.],
[-40., -42., -44., -46.],
[-35., -37., -39., -41.]])
For more on spatial structure, see Representing spatial structure and calculating distances.
9.1.6 Using multiple parameter types
It is perfectly possible to use multiple different types of parameter value at the same time:
>>> n = 1000
>>> parameters = {
...     'tau_m':   RandomDistribution('uniform', (10.0, 15.0)),
...     'cm':      0.85,
...     'v_rest':  lambda i: np.cos(i*pi*10/n),
...     'v_reset': np.linspace(-75.0, -65.0, num=n)}
>>> p = Population(n, IF_cond_alpha(**parameters))
>>> p.set(v_thresh=lambda i: -65 + i/n, tau_refrac=5.0)
Todo
in the above, give current source examples, and Projection examples
9.2 Time series parameters
Todo
discuss spike trains, current sources, Sequence class
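Pending that discussion, here is a minimal sketch of the Sequence class, used to give each cell in a spike-source population its own list of spike times (the times shown are arbitrary illustrative values):

>>> from pyNN.parameters import Sequence
>>> spike_times = [Sequence([5.0, 15.0, 25.0]),
...                Sequence([10.0, 20.0, 30.0])]
>>> p = Population(2, SpikeSourceArray(spike_times=spike_times))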
9.3 Setting initial values
Todo
complete
Note: For most neuron types, the default initial value for the membrane potential is the same as the default value
for the resting membrane potential parameter. However, be aware that changing the value of the resting membrane
potential will not automatically change the initial value.
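For example (a sketch; initialize() accepts the same kinds of value as set(), so a single number, a list/array or a RandomDistribution may all be used):

>>> p = Population(5, IF_cond_exp(v_rest=-65.0))
>>> p.initialize(v=RandomDistribution('uniform', (-70.0, -60.0)))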
CHAPTER 10
Random numbers
There are four considerations for random number generation and consumption in PyNN:
Reproducibility: When comparing simulations with different backends, we may wish to ensure that all
backends use the same sequence of random numbers so that the only differences between simulations
arise from the numerics of the simulators.
Performance: All simulators have their own built-in facilities for random number generation, and it may
be faster to use these than to use random numbers generated by PyNN.
Distributed simulations: When distributing simulations across multiple processors using MPI, we may
wish to ensure that the sequence of random numbers is independent of the number of computation
nodes.
Quality: Different models have different requirements for the quality of the (pseudo-)random number
generator used. For models that are not strongly dependent on this, we may wish to use a generator
that is faster but of lower quality. For models that are highly sensitive, a slower but higher-quality
generator may be desired.
Because of these considerations, PyNN aims to provide a great deal of flexibility in specifying random number generation for those who need it, while hiding the details entirely for those who do not.
10.1 RNG classes
All functions and methods in the PyNN API that can make use of random numbers have an optional rng argument,
which should be an instance of a subclass of pyNN.random.AbstractRNG. PyNN provides three such sub-classes:
NumpyRNG: Uses the numpy.random.RandomState class (Mersenne Twister).
GSLRNG: Uses the GNU Scientific Library random number generators.
NativeRNG: Signals that the simulator’s own built-in RNG should be used.
If you wish to use your own random number generator, it is reasonably straightforward to do so: see Random numbers
in the API reference.
Note: If the rng argument is not supplied (or is None), then the method or function creates a new NumpyRNG for its
own use.
All RNG classes accept a seed argument and a parallel_safe argument. The latter is True by default, and ensures that
the simulation results will not depend on the number of MPI nodes in a distributed simulation. This independence
can be computationally costly, however, so it is possible to set parallel_safe=False, accepting that the results will
depend on the number of nodes, in order to get better performance.
Note: parallel_safe may or may not have any effect when using a NativeRNG, depending on the simulator.
10.1.1 The next() method
Apart from the constructor, RNG classes have only one important method: next(), which returns a NumPy array
containing random numbers from the requested distribution:
>>> rng = NumpyRNG(seed=824756)
>>> rng.next(5, 'normal', {'mu': 1.0, 'sigma': 0.2})
array([ 0.65866423,  0.87500017,  0.90755753,  0.93793779,  0.94839735])
>>> rng = GSLRNG(seed=824756, type='ranlxd2')  # RANLUX algorithm of Luescher
>>> rng.next(5, 'normal', {'mu': 1.0, 'sigma': 0.2})
array([ 0.61104097,  0.83086026,  0.87072741,  0.7513628 ,  1.12875371])
In versions of PyNN prior to 0.8, distribution names and parameterisations were not standardized: e.g. GSLRNG
needed ‘gaussian’ rather than ‘normal’. As of PyNN 0.8, the following standardized names are used:
Name                          Parameters             Comments
----------------------------  ---------------------  -----------------------------------------------
binomial                      n, p
gamma                         k, theta
exponential                   beta
lognormal                     mu, sigma
normal                        mu, sigma
normal_clipped                mu, sigma, low, high   Values outside (low, high) are redrawn
normal_clipped_to_boundary    mu, sigma, low, high   Values below/above low/high are set to low/high
poisson                       lambda
uniform                       low, high
uniform_int                   low, high
vonmises                      mu, kappa
10.2 The RandomDistribution class
The RandomDistribution class encapsulates a choice of random number generator and a choice of distribution,
so that its next() method requires only the number of values required as argument:
>>> gamma = RandomDistribution('gamma', (2.0, 0.3), rng=NumpyRNG(seed=72386))
>>> gamma.next(5)
array([ 0.4325809 , 0.12952503, 1.58510406, 0.81182457, 0.07577787])
You can alternatively provide parameter names as keyword arguments, e.g.:
>>> gamma = RandomDistribution('gamma', k=2.0, theta=0.3, rng=NumpyRNG(seed=72386))
Note that next() called without any arguments returns a single number, not an array:
>>> gamma.next()
0.52020946027308368
>>> gamma.next(1)
array([ 0.4863944])
Note: the apparent difference in precision between the single number and the array is not real: NumPy only displays
a limited number of digits but the numbers in the array have full precision.
CHAPTER 11
Backends
The PyNN API provides a uniform interface to different simulators, but nevertheless each simulator has features that
are not available in other simulators, and we aim to make these features accessible, as much as possible, from PyNN.
For each simulator backend, this section presents the configuration options specific to that backend and explains how
to use “native” neuron and synapse models within the PyNN framework.
11.1 NEURON
11.1.1 Configuration options
Adaptive time step integration
The default integration method used by the pyNN.neuron backend uses a fixed time step, specified by the timestep
argument to the setup() function.
NEURON also supports use of variable time step methods, which can improve simulation speed:
setup(use_cvode=True)
If using cvode, there are two more optional parameters:
setup(use_cvode=True,
      rtol=0.001,  # specify relative error tolerance
      atol=1e-4)   # specify absolute error tolerance

If not specified, the default values are rtol = 0 and atol = 0.001. For full details, see the CVode documentation.
Todo
native_rng_baseseed is added to MPI.rank to form seed for SpikeSourcePoisson, etc., but I think it would be better to
add a seed parameter to SpikeSourcePoisson
Todo
Population.get_data() does not yet handle cvode properly.
11.1.2 Using native cell models
A native NEURON cell model is described using a Python class (which may wrap a Hoc template). For this class to
work with PyNN, there are a small number of requirements:
• the __init__() method should take just **parameters as its argument.
• instances should have attributes:
– source: a reference to the membrane potential which will be monitored for spike emission, e.g.
self.soma(0.5)._ref_v
– source_section: the Hoc Section in which source is located.
– parameter_names: a tuple of the names of attributes/properties of the class that correspond to parameters of the model.
– traces: an empty dict, used for recording.
– recording_time: should be False initially.
• there must be a memb_init() method, taking no arguments.
Here is an example, which uses the nrnutils package for conciseness:
from nrnutils import Mechanism, Section

class SimpleNeuron(object):

    def __init__(self, **parameters):
        hh = Mechanism('hh', gl=parameters['g_leak'], el=-65,
                       gnabar=parameters['gnabar'], gkbar=parameters['gkbar'])
        self.soma = Section(L=30, diam=30, mechanisms=[hh])
        self.soma.add_synapse('ampa', 'Exp2Syn', e=0.0, tau1=0.1, tau2=5.0)
        # needed for PyNN
        self.source_section = self.soma
        self.source = self.soma(0.5)._ref_v
        self.parameter_names = ('g_leak', 'gnabar', 'gkbar')
        self.traces = {}
        self.recording_time = False

    def _set_gnabar(self, value):
        for seg in self.soma:
            seg.hh.gnabar = value

    def _get_gnabar(self):
        return self.soma(0.5).hh.gnabar

    gnabar = property(fget=_get_gnabar, fset=_set_gnabar)

    # ... gkbar and g_leak properties defined similarly

    def memb_init(self):
        for seg in self.soma:
            seg.v = self.v_init
For each cell model, you must also define a cell type:
from pyNN.neuron import NativeCellType

class SimpleNeuronType(NativeCellType):
    default_parameters = {'g_leak': 0.0002, 'gkbar': 0.036, 'gnabar': 0.12}
    default_initial_values = {'v': -65.0}
    recordable = ['soma(0.5).v', 'soma(0.5).ina']
    units = {'soma(0.5).v': 'mV', 'soma(0.5).ina': 'nA'}
    receptor_types = ['soma.ampa']
    model = SimpleNeuron
The requirement to explicitly list all variables you might wish to record in the recordable attribute is a temporary
inconvenience, which will be removed in a future version.
It is now straightforward to use this cell type in PyNN:
from pyNN.neuron import setup, run, Population, Projection, AllToAllConnector, StaticSynapse
setup()
p1 = Population(10, SimpleNeuronType(g_leak=0.0003))
p1.record('soma(0.5).ina')
syn = StaticSynapse(weight=0.01, delay=0.5)
prj = Projection(p1, p1, AllToAllConnector(), syn, receptor_type='soma.ampa')
run(100.0)
output = p1.get_data()
If your model relies on other NMODL mechanisms, call the load_mechanisms() function with the path to the
directory containing the .mod files.
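For example (a sketch only; the path is a placeholder, and we assume here that load_mechanisms() is imported from pyNN.neuron as the text above implies):

from pyNN.neuron import load_mechanisms
load_mechanisms("/path/to/mod_files")  # placeholder: directory containing the compiled .mod files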
11.2 NEST
11.2.1 Configuration options
Continuous time spiking
In traditional simulation schemes spikes are constrained to an equidistant time grid. However, for some neuron models,
NEST has the capability to represent spikes in continuous time.
At setup the user can choose the continuous time scheme
setup(spike_precision='off_grid')
or the conventional grid-constrained scheme
setup(spike_precision='on_grid')
where ‘on_grid’ is the default.
Todo
consider changing the default to off_grid, since PyNN’s defaults are supposed to prefer accuracy/comparability over
performance.
As of NEST 2.0.0, the following PyNN standard models have an off-grid implementation: IF_curr_alpha (in
preparation), IF_curr_exp, EIF_cond_alpha_isfa_ista.
Todo
check the accuracy of the above list before release
Todo
add a list of native NEST models with off-grid capability
Here is an example showing how to specify the option in a PyNN script and an illustration of the different outcomes:
import numpy
from pyNN.nest import *
import matplotlib.pyplot as plt

def test_sim(on_or_off_grid, sim_time):
    setup(timestep=1.0, min_delay=1.0, max_delay=1.0, spike_precision=on_or_off_grid)
    src = Population(1, SpikeSourceArray(spike_times=[0.5]))
    cm = 250.0
    tau_m = 10.0
    tau_syn_E = 1.0
    weight = cm / tau_m * numpy.power(tau_syn_E / tau_m, -tau_m / (tau_m - tau_syn_E)) * 20.5
    nrn = Population(1, IF_curr_exp(cm=cm, tau_m=tau_m, tau_syn_E=tau_syn_E,
                                    tau_refrac=2.0, v_thresh=20.0, v_rest=0.0,
                                    v_reset=0.0, i_offset=0.0))
    nrn.initialize(v=0.0)
    prj = Projection(src, nrn, OneToOneConnector(), StaticSynapse(weight=weight))
    nrn.record('v')
    run(sim_time)
    return nrn.get_data().segments[0].analogsignalarrays[0]

sim_time = 10.0
off = test_sim('off_grid', sim_time)
on = test_sim('on_grid', sim_time)

def plot_data(pos, on, off, ylim, with_legend=False):
    ax = plt.subplot(1, 2, pos)
    ax.plot(off.times, off, color='0.7', linewidth=7, label='off-grid')
    ax.plot(on.times, on, 'k', label='on-grid')
    ax.set_ylim(*ylim)
    ax.set_xlim(0, 9)
    ax.set_xlabel('time [ms]')
    ax.set_ylabel('V [mV]')
    if with_legend:
        plt.legend()

plot_data(1, on, off, (-0.5, 21), with_legend=True)
plot_data(2, on, off, (-0.05, 2.1))
plt.show()
The gray curve shows the membrane potential excursion in response to an input spike arriving at the neuron at t =
1.5 ms (left panel, the right panel shows an enlargement at low voltages). The amplitude of the post-current has an
unrealistically high value such that the threshold voltage for spike generation is crossed. The membrane potential is
recorded in intervals of 1 ms. Therefore the first non-zero value is measured at t = 2 ms. The threshold is crossed
somewhere in the interval (3 ms, 4 ms], resulting in a voltage of 0 at t = 4 ms. The membrane potential is clamped
to 0 for 2 ms, the refractory period. Therefore, the neuron recovers from refractoriness somewhere in the interval (5
ms, 6 ms] and the next non-zero voltage is observed at t = 6 ms. The black curve shows the results of the same model
now integrated with a grid constrained simulation scheme with a computation step size of 1 ms. The input spike is
mapped to the next grid position and therefore arrives at t = 2 ms. The first non-zero voltage is observed at t = 3 ms.
The output spike is emitted at t = 4 ms and this is the time at which the membrane potential is reset. Consequently, the
model neuron returns from refractoriness at exactly t = 6 ms. The next non-zero membrane potential value is observed
at t = 7 ms.
The following publication describes how the continuous time mode is implemented in NEST and compares the performance of different approaches:
Hanuschkin A, Kunkel S, Helias M, Morrison A and Diesmann M (2010) A general and efficient method
for incorporating precise spike times in globally time-driven simulations. Front. Neuroinform. 4:113.
doi:10.3389/fninf.2010.00113
11.2.2 Using native cell models
To use a NEST neuron model with PyNN, we wrap the NEST model with a PyNN NativeCellType class, e.g.:
>>> from pyNN.nest import native_cell_type, Population, run, setup
>>> setup()
0
>>> ht_neuron = native_cell_type('ht_neuron')
>>> poisson = native_cell_type('poisson_generator')
>>> p1 = Population(10, ht_neuron(Tau_m=20.0))
>>> p2 = Population(1, poisson(rate=200.0))
We can now initialize state variables, set/get parameter values, and record from these neurons as from standard cells:
>>> p1.get('Tau_m')
20.0
>>> p1.get('Tau_theta')
2.0
>>> p1.get('C_m')
Traceback (most recent call last):
...
NonExistentParameterError: C_m (valid parameters for ht_neuron are:
AMPA_E_rev, AMPA_Tau_1, AMPA_Tau_2, AMPA_g_peak, E_K, E_Na, GABA_A_E_rev,
GABA_A_Tau_1, GABA_A_Tau_2, GABA_A_g_peak, GABA_B_E_rev, GABA_B_Tau_1,
GABA_B_Tau_2, GABA_B_g_peak, KNa_E_rev, KNa_g_peak, NMDA_E_rev, NMDA_Sact,
NMDA_Tau_1, NMDA_Tau_2, NMDA_Vact, NMDA_g_peak, NaP_E_rev, NaP_g_peak,
T_E_rev, T_g_peak, Tau_m, Tau_spike, Tau_theta, Theta_eq, g_KL, g_NaL,
h_E_rev, h_g_peak, spike_duration)
>>> p1.initialize(V_m=-70.0, Theta=-50.0)
>>> p1.record('V_m')
>>> run(250.0)
250.0
>>> output = p1.get_data()
To connect populations of native cells, you need to know the available synaptic receptor types:
>>> ht_neuron.receptor_types
['NMDA', 'AMPA', 'GABA_A', 'GABA_B']
>>> from pyNN.nest import Projection, AllToAllConnector
>>> connector = AllToAllConnector()
>>> prj_ampa = Projection(p2, p1, connector, receptor_type='AMPA')
>>> prj_nmda = Projection(p2, p1, connector, receptor_type='NMDA')
11.2.3 Using native synaptic plasticity models
To use a NEST STDP model with PyNN, we use the native_synapse_type() function:
>>> from pyNN.nest import native_synapse_type
>>> stdp = native_synapse_type("stdp_synapse")(**{"Wmax": 50.0, "lambda": 0.015})
>>> prj_plastic = Projection(p1, p1, connector, receptor_type='AMPA', synapse_type=stdp)
Common synapse properties
Some NEST synapse models (e.g. stdp_facetshw_synapse_hom) make use of common synapse properties to
conserve memory. This has the following implications for their usage in PyNN:
• Common properties can only have one homogeneous value per projection. Trying to assign heterogeneous
values will result in a ValueError.
• Common properties can currently not be retrieved using Projection.get. However, they will only deviate
from the default when changed manually.
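For example (a sketch only, not a definitive usage pattern; the exact parameter names depend on the NEST model in question):

>>> FacetsSynapse = native_synapse_type("stdp_facetshw_synapse_hom")
>>> syn = FacetsSynapse(weight=0.001)  # a single value per property is fine
>>> # supplying a RandomDistribution, list or array for a common property
>>> # raises ValueError, since such properties are shared by all the
>>> # connections in a Projection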
11.3 Brian
11.4 NeMo
11.5 MOOSE
11.6 NeuroML
11.7 NineML
The NineML backend is described in ../nineml
11.8 Neuromorphic hardware
CHAPTER 12
Running parallel simulations
Where the underlying simulator supports distributed simulations, in which the computations are spread over multiple
processors using MPI (this is the case for NEURON and NEST), PyNN also supports this. To run a distributed
simulation on eight nodes, the command will be something like:
$ mpirun -np 8 -machinefile ~/mpi_hosts python myscript.py
Depending on the implementation of MPI you have, mpirun could be replaced by mpiexec or another command,
and the options may also be somewhat different.
For NEURON only, you can also run distributed simulations using nrniv instead of the python executable:
$ mpirun -np 8 -machinefile ~/mpi_hosts nrniv -python -mpi myscript.py
12.1 Additional requirements
First, make sure you have compiled the simulators you wish to use with MPI enabled. There is usually a configure flag
called something like “--with-mpi” to do this, but see the installation documentation for each simulator for details.
If you wish to use the default “gather” feature (see below), which automatically gathers output data from all the nodes
to the master node (the one on which you launched the simulation), you will need to install the mpi4py module (see
http://mpi4py.scipy.org/ for downloads and documentation). Installation is usually very straightforward, although, if
you have more than one MPI implementation installed on your system (e.g. OpenMPI and MPICH2), you must be
sure to build mpi4py with the same MPI implementation that you used to build the simulator.
12.2 Code modifications
In most cases, no modifications to your code should be necessary to run in parallel. PyNN, and the simulator, take
care of distributing the computations between nodes. Furthermore, the default settings should give results that are
independent of the number of processors used, even when using random numbers.
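One idiom that is sometimes useful, however, is to restrict console output to the master node (a minimal sketch, using the functions described in Simulation control; the NEST backend is shown, but any MPI-enabled backend works the same way):

from pyNN.nest import setup, num_processes, rank

setup()
if rank() == 0:
    print("Simulation distributed over %d MPI processes" % num_processes())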
12.3 Gathering data to the master node
The various methods of the Population and Assembly classes that deal with accessing recorded data or writing it
to disk, such as get_data(), write_data(), etc., have an optional argument gather, which is True by default.
If gather is True, then data generated on other nodes is sent to the master node. This means that, for example,
write_data() will create only a single file, on the filesystem of the master node. If gather is False, each node
will write a file on its local filesystem. This option is often desirable if you wish to do distributed post-processing of
the data. (Don’t worry, by the way, if you are using a shared filesystem such as NFS. If gather is False then the MPI
rank is appended to the filename, so there is no chance of conflict between the different nodes).
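For example (a sketch; the filename is a placeholder):

population.write_data("results.pkl", gather=False)  # each node writes its own file, with the MPI rank appended to the filename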
12.4 Random number generators
In general, we expect that our results should not depend on the number of processors used to produce them. If our
simulations use random numbers in setting-up or running the network, this means that each object that uses random
numbers should receive the same sequence independent of which node it is on or how many nodes there are. PyNN
achieves this by ensuring the generator seed is the same on all nodes, and then generating as many random numbers
as would be used in the single-processor case and throwing away those that are not needed.
This obviously has a potential impact on performance, and so it is possible to turn it off by passing parallel_safe=False
as argument when creating the random number generator, e.g.:
>>> from pyNN.random import NumpyRNG
>>> rng = NumpyRNG(seed=249856, parallel_safe=False)
Now, PyNN will ensure the seed is different on each node, and will generate only as many numbers as are actually
needed on each node.
Note that the above applies only to the random number generators provided by the pyNN.random module,
not to the native RNGs used internally by each simulator. This means that, for example, if you care about your
results being independent of the number of processors, you should prefer SpikeSourceArray (for which you can
generate Poisson spike times using a parallel-safe RNG) to SpikeSourcePoisson, which uses the simulator's
internal RNG.
CHAPTER 13
Units
PyNN does not at present support explicit specification of the units of physical quantities (parameters and initial
values). Instead, the following convention is used:
Physical quantity    Units
-----------------    -----
time                 ms
voltage              mV
current              nA
conductance          µS
capacitance          nF
firing rate          /s
phase/angle          deg
CHAPTER 14
Examples
14.1 A selection of Izhikevich neurons
"""
A selection of Izhikevich neurons.
Run as:
$ python Izhikevich.py <simulator>
where <simulator> is 'neuron', 'nest', etc.
"""
from numpy import arange
from pyNN.utility import get_simulator, init_logging, normalized_filename
# === Configure the simulator ================================================
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file.",
                              {"action": "store_true"}),
                             ("--debug", "Print debugging information"))

if options.debug:
    init_logging(None, debug=True)
sim.setup(timestep=0.01, min_delay=1.0)
# === Build and instrument the network =======================================
neurons = sim.Population(3, sim.Izhikevich(a=0.02, b=0.2, c=-65, d=6, i_offset=[0.014, 0.0, 0.0]))
spike_source = sim.Population(1, sim.SpikeSourceArray(spike_times=arange(10.0, 51, 1)))
connection = sim.Projection(spike_source, neurons[1:2], sim.OneToOneConnector(),
                            sim.StaticSynapse(weight=3.0, delay=1.0),
                            receptor_type='excitatory')
electrode = sim.DCSource(start=2.0, stop=92.0, amplitude=0.014)
electrode.inject_into(neurons[2:3])
neurons.record(['v']) # , 'u'])
neurons.initialize(v=-70.0, u=-14.0)
# === Run the simulation =====================================================
sim.run(100.0)
# === Save the results, optionally plot a figure =============================
filename = normalized_filename("Results", "Izhikevich", "pkl",
options.simulator, sim.num_processes())
neurons.write_data(filename, annotations={'script_name': __file__})
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel
    figure_filename = filename.replace("pkl", "png")
    data = neurons.get_data().segments[0]
    v = data.filter(name="v")[0]
    #u = data.filter(name="u")[0]
    Figure(
        Panel(v, ylabel="Membrane potential (mV)", xticks=True,
              xlabel="Time (ms)", yticks=True),
        #Panel(u, ylabel="u variable (units?)"),
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
    print(figure_filename)
# === Clean up and quit ========================================================
sim.end()
14.2 Injecting time-varying current into a cell
"""
Injecting time-varying current into a cell.
There are four "standard" current sources in PyNN:

    - DCSource
    - ACSource
    - StepCurrentSource
    - NoisyCurrentSource

Any other current waveforms can be implemented using StepCurrentSource.

Usage: current_injection.py [-h] [--plot-figure] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  Plot the simulation results to a file
"""
from pyNN.utility import get_simulator, normalized_filename
# === Configure the simulator ================================================
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file",
{"action": "store_true"}))
sim.setup()
# === Create four cells and inject current into each one =====================
cells = sim.Population(4, sim.IF_curr_exp(v_thresh=-55.0, tau_refrac=5.0, tau_m=10.0))
current_sources = [sim.DCSource(amplitude=0.5, start=50.0, stop=400.0),
sim.StepCurrentSource(times=[50.0, 210.0, 250.0, 410.0],
amplitudes=[0.4, 0.6, -0.2, 0.2]),
sim.ACSource(start=50.0, stop=450.0, amplitude=0.2,
offset=0.1, frequency=10.0, phase=180.0),
sim.NoisyCurrentSource(mean=0.5, stdev=0.2, start=50.0,
stop=450.0, dt=1.0)]
for cell, current_source in zip(cells, current_sources):
    cell.inject(current_source)
filename = normalized_filename("Results", "current_injection", "pkl", options.simulator)
sim.record('v', cells, filename, annotations={'script_name': __file__})
# === Run the simulation =====================================================
sim.run(500.0)
# === Save the results, optionally plot a figure =============================
vm = cells.get_data().segments[0].filter(name="v")[0]
sim.end()
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel
    from quantities import mV
    figure_filename = filename.replace("pkl", "png")
    Figure(
        Panel(vm, y_offset=-10 * mV, xticks=True, yticks=True,
              xlabel="Time (ms)", ylabel="Membrane potential (mV)",
              ylim=(-96, -59)),
        title="Current injection example",
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
14.3 A demonstration of the responses of different standard neuron models to current injection
"""
A demonstration of the responses of different standard neuron models to current injection.
Usage: python cell_type_demonstration.py [-h] [--plot-figure] [--debug] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  Plot the simulation results to a file.
  --debug        Print debugging information
"""
from pyNN.utility import get_simulator, init_logging, normalized_filename
# === Configure the simulator ================================================
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file.",
                              {"action": "store_true"}),
                             ("--debug", "Print debugging information"))

if options.debug:
    init_logging(None, debug=True)
sim.setup(timestep=0.01, min_delay=1.0)
# === Build and instrument the network =======================================
cuba_exp = sim.Population(1, sim.IF_curr_exp(i_offset=1.0), label="IF_curr_exp")
hh = sim.Population(1, sim.HH_cond_exp(i_offset=0.2), label="HH_cond_exp")
adexp = sim.Population(1, sim.EIF_cond_exp_isfa_ista(i_offset=1.0), label="EIF_cond_exp_isfa_ista")
adapt = sim.Population(1, sim.IF_cond_exp_gsfa_grr(i_offset=2.0), label="IF_cond_exp_gsfa_grr")
izh = sim.Population(1, sim.Izhikevich(i_offset=0.01), label="Izhikevich")
all_neurons = cuba_exp + hh + adexp + adapt + izh
all_neurons.record('v')
adexp.record('w')
izh.record('u')
# === Run the simulation =====================================================
sim.run(100.0)
# === Save the results, optionally plot a figure =============================
filename = normalized_filename("Results", "cell_type_demonstration", "pkl", options.simulator)
all_neurons.write_data(filename, annotations={'script_name': __file__})
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel
    figure_filename = filename.replace("pkl", "png")
    Figure(
        Panel(cuba_exp.get_data().segments[0].filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[cuba_exp.label], yticks=True, ylim=(-66, -48)),
        Panel(hh.get_data().segments[0].filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[hh.label], yticks=True, ylim=(-100, 60)),
        Panel(adexp.get_data().segments[0].filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[adexp.label], yticks=True, ylim=(-75, -40)),
        Panel(adexp.get_data().segments[0].filter(name='w')[0],
              ylabel="w (nA)",
              data_labels=[adexp.label], yticks=True, ylim=(0, 0.4)),
        Panel(adapt.get_data().segments[0].filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[adapt.label], yticks=True, ylim=(-75, -45)),
        Panel(izh.get_data().segments[0].filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[izh.label], yticks=True, ylim=(-80, 40)),
        Panel(izh.get_data().segments[0].filter(name='u')[0],
              xticks=True, xlabel="Time (ms)",
              ylabel="u (mV/ms)",
              data_labels=[izh.label], yticks=True, ylim=(-14, 0)),
        title="Responses of standard neuron models to current injection",
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
    print(figure_filename)
# === Clean up and quit ========================================================
sim.end()
14.4 An example to illustrate random number handling in PyNN
"""
An example to illustrate random number handling in PyNN
In particular, this shows the difference between "native" and Python random number generators.
If you run this script with two different simulators, e.g. NEST and NEURON,
the weight matrix created with the Python RNG will be the same for both simulations,
the weights created with the native RNG will be different in the two cases.
The potential advantage of using a native RNG is speed: for large networks, using
the `NativeRNG` class can reduce network construction time, but at the expense of
cross-simulator repeatability.
Usage: random_numbers.py [-h] [--plot-figure] [--debug DEBUG] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  plot the simulation results to a file
  --debug DEBUG  print debugging information
"""
import numpy
from pyNN.random import NumpyRNG, RandomDistribution
from pyNN.utility import get_simulator
# === Configure the simulator ================================================
sim, options = get_simulator(
    ("--plot-figure", "plot the simulation results to a file", {"action": "store_true"}),
    ("--debug", "print debugging information"))
sim.setup()
# === Create random number generators ========================================
python_rng = NumpyRNG(seed=98497627)
native_rng = sim.NativeRNG(seed=87354762)
# === Define the neuron model and initial conditions =========================
cell_type = sim.IF_cond_exp(tau_m=RandomDistribution('normal', (15.0, 2.0), rng=python_rng))  # not possible with NEST to use NativeRNG here
v_init = RandomDistribution('uniform',
                            (cell_type.default_parameters['v_rest'], cell_type.default_parameters['v_thresh']),
                            rng=python_rng)  # not possible with NEST to use NativeRNG here
# === Create populations of neurons, and record from them ====================
p1 = sim.Population(10, sim.SpikeSourcePoisson(rate=100.0))  # in the current version, can't specify the RNG: it is always native
p2 = sim.Population(10, cell_type, initial_values={'v': v_init})
p1.record("spikes")
p2.record("spikes")
p2.sample(3, rng=python_rng).record("v")  # can't use native RNG here
# === Create two sets of synaptic connections, one for each RNG ==============
connector_native = sim.FixedProbabilityConnector(p_connect=0.7, rng=native_rng)
connector_python = sim.FixedProbabilityConnector(p_connect=0.7, rng=python_rng)
synapse_type_native = sim.StaticSynapse(weight=RandomDistribution('gamma', k=2.0, theta=0.5, rng=native_rng),
                                        delay=0.5)
synapse_type_python = sim.StaticSynapse(weight=RandomDistribution('gamma', k=2.0, theta=0.5, rng=python_rng),
                                        delay=0.5)
projection_native = sim.Projection(p1, p2, connector_native, synapse_type_native)
projection_python = sim.Projection(p1, p2, connector_python, synapse_type_python)
# === Print the synaptic weight matrices =====================================
weights_python = projection_python.get("weight", format="array")
weights_native = projection_native.get("weight", format="array")
print(weights_python)
print(weights_native)
# === Run the simulation =====================================================
sim.run(100.0)
sim.end()
# === Optionally, plot the synaptic weight matrices ==========================
if options.plot_figure:
    from pyNN.utility import normalized_filename
    from pyNN.utility.plotting import Figure, Panel
    filename = normalized_filename("Results", "random_numbers", "png", options.simulator)
    # where there is no connection, the weight matrix contains NaN
    # for plotting purposes, we replace NaN with zero.
    weights_python[numpy.isnan(weights_python)] = 0
    weights_native[numpy.isnan(weights_native)] = 0
    Figure(
        Panel(weights_python, cmap='gray_r', xlabel="Python RNG"),
        Panel(weights_native, cmap='gray_r', xlabel="Native RNG"),
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(filename)
    print(filename)
14.5 Illustration of the different standard random distributions and different random number generators
"""
Illustration of the different standard random distributions and different random number generators
"""
import numpy
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import scipy.stats
import pyNN.random as random
try:
    from neuron import h
except ImportError:
    have_nrn = False
else:
    have_nrn = True
    from pyNN.neuron.random import NativeRNG
n = 100000
nbins = 100
rnglist = [random.NumpyRNG(seed=984527)]
if random.have_gsl:
    rnglist.append(random.GSLRNG(seed=668454))
if have_nrn:
    rnglist.append(NativeRNG(seed=321245))
cases = (
    ("uniform", {"low": -65, "high": -55}, (-65, -55), scipy.stats.uniform(loc=-65, scale=10)),
    ("gamma", {"k": 2.0, "theta": 0.5}, (0, 5), scipy.stats.gamma(2.0, loc=0.0, scale=0.5)),
    ("normal", {"mu": -1.0, "sigma": 0.5}, (-3, 1), scipy.stats.norm(loc=-1, scale=0.5)),
    ("exponential", {'beta': 10.0}, (0, 50), scipy.stats.expon(loc=0, scale=10)),
    ("normal_clipped", {"mu": 0.5, "sigma": 0.5, "low": 0, "high": 10}, (-0.5, 3.0), None),
)
fig = plt.figure(1)
rows = len(cases)
cols = len(rnglist)
settings = {
    'lines.linewidth': 0.5,
    'axes.linewidth': 0.5,
    'axes.labelsize': 'small',
    'axes.titlesize': 'small',
    'legend.fontsize': 'small',
    'font.size': 8,
    'savefig.dpi': 150,
}
plt.rcParams.update(settings)
width, height = (2 * cols, 2 * rows)
fig = plt.figure(1, figsize=(width, height))
gs = gridspec.GridSpec(rows, cols)
gs.update(hspace=0.4)
for i, case in enumerate(cases):
    distribution, parameters, xlim, rv = case
    bins = numpy.linspace(*xlim, num=nbins)
    for j, rng in enumerate(rnglist):
        rd = random.RandomDistribution(distribution, rng=rng, **parameters)
        values = rd.next(n)
        assert values.size == n
        plt.subplot(gs[i, j])
        counts, bins, _ = plt.hist(values, bins, range=xlim)
        plt.title("%s.%s%s" % (rng, distribution, parameters.values()))
        if rv is not None:
            pdf = rv.pdf(bins)
            scaled_pdf = n * pdf / pdf.sum()
            plt.plot(bins, scaled_pdf, 'r-')
            plt.ylim(0, 1.2 * scaled_pdf.max())
        plt.xlim(xlim)

plt.savefig("Results/random_distributions.png")
14.6 A very simple example of using STDP
# encoding: utf8
"""
A very simple example of using STDP.
A single postsynaptic neuron fires at a constant rate. We connect several
presynaptic neurons to it, each of which fires spikes with a fixed time
lag or time advance with respect to the postsynaptic neuron.
The weights of these connections are very small, so they will not
significantly affect the firing times of the post-synaptic neuron.
We plot the amount of potentiation or depression of each synapse as a
function of the time difference.
Usage: python simple_STDP.py [-h] [--plot-figure] [--debug DEBUG] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  Plot the simulation results to a file
  --fit-curve    Calculate the best-fit curve to the weight-delta_t measurements
  --debug DEBUG  Print debugging information
"""
from __future__ import division
from math import exp
import numpy
import neo
from quantities import ms
from pyNN.utility import get_simulator, init_logging, normalized_filename
from pyNN.utility.plotting import DataTable
from pyNN.parameters import Sequence
# === Parameters ============================================================
firing_period = 100.0    # (ms) interval between spikes
cell_parameters = {
    "tau_m": 10.0,       # (ms)
    "v_thresh": -50.0,   # (mV)
    "v_reset": -60.0,    # (mV)
    "v_rest": -60.0,     # (mV)
    "cm": 1.0,           # (nF)
    "tau_refrac": firing_period / 2,  # (ms) long refractory period to prevent bursting
}
n = 60                   # number of synapses / number of presynaptic neurons
delta_t = 1.0            # (ms) time difference between the firing times of neighbouring neurons
t_stop = 10 * firing_period + n * delta_t
delay = 3.0              # (ms) synaptic time delay
# === Configure the simulator ===============================================
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file",
                              {"action": "store_true"}),
                             ("--fit-curve", "Calculate the best-fit curve to the weight-delta_t measurements",
                              {"action": "store_true"}),
                             ("--dendritic-delay-fraction", "What fraction of the total transmission delay is due to dendritic propagation",
                              {"default": 1}),
                             ("--debug", "Print debugging information"))

if options.debug:
    init_logging(None, debug=True)

sim.setup(timestep=0.01, min_delay=delay, max_delay=delay)
# === Build the network =====================================================
def build_spike_sequences(period, duration, n, delta_t):
    """
    Return a spike time generator for `n` neurons (spike sources), where
    all neurons fire with the same period, but neighbouring neurons have a relative
    firing time difference of `delta_t`.
    """
    def spike_time_gen(i):
        """Spike time generator. `i` should be an array of indices."""
        return [Sequence(numpy.arange(period + j * delta_t, duration, period)) for j in (i - n // 2)]
    return spike_time_gen
spike_sequence_generator = build_spike_sequences(firing_period, t_stop, n, delta_t)
# presynaptic population
p1 = sim.Population(n, sim.SpikeSourceArray(spike_times=spike_sequence_generator),
label="presynaptic")
# single postsynaptic neuron
p2 = sim.Population(1, sim.IF_cond_exp(**cell_parameters),
initial_values={"v": cell_parameters["v_reset"]}, label="postsynaptic")
# drive to the postsynaptic neuron, ensuring it fires at exact multiples of the firing period
p3 = sim.Population(1, sim.SpikeSourceArray(spike_times=numpy.arange(firing_period - delay, t_stop, firing_period)),
                    label="driver")
# we set the initial weights to be very small, to avoid perturbing the firing times of the
# postsynaptic neurons
stdp_model = sim.STDPMechanism(
timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
A_plus=0.01, A_minus=0.012),
weight_dependence=sim.AdditiveWeightDependence(w_min=0, w_max=0.0000001),
weight=0.00000005,
delay=delay,
dendritic_delay_fraction=float(options.dendritic_delay_fraction))
connections = sim.Projection(p1, p2, sim.AllToAllConnector(), stdp_model)
# the connection weight from the driver neuron is very strong, to ensure the
# postsynaptic neuron fires at the correct times
driver_connection = sim.Projection(p3, p2, sim.OneToOneConnector(),
sim.StaticSynapse(weight=10.0, delay=delay))
# == Instrument the network =================================================
p1.record('spikes')
p2.record(['spikes', 'v'])
class WeightRecorder(object):
    """
    Recording of weights is not yet built in to PyNN, so therefore we need
    to construct a callback object, which reads the current weights from
    the projection at regular intervals.
    """

    def __init__(self, sampling_interval, projection):
        self.interval = sampling_interval
        self.projection = projection
        self._weights = []

    def __call__(self, t):
        self._weights.append(self.projection.get('weight', format='list', with_address=False))
        return t + self.interval

    def get_weights(self):
        return neo.AnalogSignalArray(self._weights, units='nA', sampling_period=self.interval * ms,
                                     channel_index=numpy.arange(len(self._weights[0])),
                                     name="weight")
weight_recorder = WeightRecorder(sampling_interval=1.0, projection=connections)
# === Run the simulation =====================================================
sim.run(t_stop, callbacks=[weight_recorder])
# === Save the results, optionally plot a figure =============================
filename = normalized_filename("Results", "simple_stdp", "pkl", options.simulator)
p2.write_data(filename, annotations={'script_name': __file__})
presynaptic_data = p1.get_data().segments[0]
postsynaptic_data = p2.get_data().segments[0]
print("Post-synaptic spike times: %s" % postsynaptic_data.spiketrains[0])
weights = weight_recorder.get_weights()
final_weights = numpy.array(weights[-1])
deltas = delta_t * numpy.arange(n // 2, -n // 2, -1)
print("Final weights: %s" % final_weights)
plasticity_data = DataTable(deltas, final_weights)
if options.fit_curve:
    def double_exponential(t, t0, w0, wp, wn, tau):
        return w0 + numpy.where(t >= t0, wp * numpy.exp(-(t - t0) / tau), wn * numpy.exp((t - t0) / tau))
    p0 = (-1.0, 5e-8, 1e-8, -1.2e-8, 20.0)
    popt, pcov = plasticity_data.fit_curve(double_exponential, p0, ftol=1e-10)
    print("Best fit parameters: t0={0}, w0={1}, wp={2}, wn={3}, tau={4}".format(*popt))
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel, DataTable
    figure_filename = filename.replace("pkl", "png")
    Figure(
        # raster plot of the presynaptic neuron spike times
        Panel(presynaptic_data.spiketrains,
              yticks=True, markersize=0.2, xlim=(0, t_stop)),
        # membrane potential of the postsynaptic neuron
        Panel(postsynaptic_data.filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[p2.label], yticks=True, xlim=(0, t_stop)),
        # evolution of the synaptic weights with time
        Panel(weights, xticks=True, yticks=True, xlabel="Time (ms)",
              legend=False, xlim=(0, t_stop)),
        # scatterplot of the final weight of each synapse against the relative
        # timing of pre- and postsynaptic spikes for that synapse
        Panel(plasticity_data,
              xticks=True, yticks=True, xlim=(-n / 2 * delta_t, n / 2 * delta_t),
              ylim=(0.9 * final_weights.min(), 1.1 * final_weights.max()),
              xlabel="t_post - t_pre (ms)", ylabel="Final weight (nA)",
              show_fit=options.fit_curve),
        title="Simple STDP example",
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
    print(figure_filename)
# === Clean up and quit ========================================================
sim.end()
14.7 Small network created with the Population and Projection classes
# encoding: utf-8
"""
Small network created with the Population and Projection classes
Usage: small_network.py [-h] [--plot-figure] [--debug DEBUG] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  plot the simulation results to a file
  --debug DEBUG  print debugging information
"""
import numpy
from pyNN.utility import get_simulator, init_logging, normalized_filename
from pyNN.parameters import Sequence
from pyNN.random import RandomDistribution as rnd
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file.",
                              {"action": "store_true"}),
                             ("--debug", "Print debugging information"))

if options.debug:
    init_logging(None, debug=True)
# === Define parameters ========================================================
n = 20        # Number of cells
w = 0.002     # synaptic weight (µS)
cell_params = {
    'tau_m':      20.0,    # (ms)
    'tau_syn_E':   2.0,    # (ms)
    'tau_syn_I':   4.0,    # (ms)
    'e_rev_E':     0.0,    # (mV)
    'e_rev_I':   -70.0,    # (mV)
    'tau_refrac':  2.0,    # (ms)
    'v_rest':    -60.0,    # (mV)
    'v_reset':   -70.0,    # (mV)
    'v_thresh':  -50.0,    # (mV)
    'cm':          0.5}    # (nF)
dt         = 0.1      # (ms)
syn_delay  = 1.0      # (ms)
input_rate = 50.0     # (Hz)
simtime    = 1000.0   # (ms)
# === Build the network ========================================================
sim.setup(timestep=dt, max_delay=syn_delay)
cells = sim.Population(n, sim.IF_cond_alpha(**cell_params),
initial_values={'v': rnd('uniform', (-60.0, -50.0))},
label="cells")
number = int(2 * simtime * input_rate / 1000.0)
numpy.random.seed(26278342)
def generate_spike_times(i):
    gen = lambda: Sequence(numpy.add.accumulate(numpy.random.exponential(1000.0 / input_rate, size=number)))
    if hasattr(i, "__len__"):
        return [gen() for j in i]
    else:
        return gen()

assert generate_spike_times(0).max() > simtime
spike_source = sim.Population(n, sim.SpikeSourceArray(spike_times=generate_spike_times))
spike_source.record('spikes')
cells.record('spikes')
cells[0:2].record(('v', 'gsyn_exc'))
syn = sim.StaticSynapse(weight=w, delay=syn_delay)
input_conns = sim.Projection(spike_source, cells, sim.FixedProbabilityConnector(0.5), syn)
# === Run simulation ===========================================================
sim.run(simtime)
filename = normalized_filename("Results", "small_network", "pkl",
options.simulator, sim.num_processes())
cells.write_data(filename, annotations={'script_name': __file__})
print("Mean firing rate: ", cells.mean_spike_count() * 1000.0 / simtime, "Hz")
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel
    figure_filename = filename.replace("pkl", "png")
    data = cells.get_data().segments[0]
    vm = data.filter(name="v")[0]
    gsyn = data.filter(name="gsyn_exc")[0]
    Figure(
        Panel(vm, ylabel="Membrane potential (mV)"),
        Panel(gsyn, ylabel="Synaptic conductance (uS)"),
        Panel(data.spiketrains, xlabel="Time (ms)", xticks=True),
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
    print(figure_filename)
# === Clean up and quit ========================================================
sim.end()
14.8 A demonstration of the responses of different standard neuron models to synaptic input
"""
A demonstration of the responses of different standard neuron models to synaptic input.
This should show that for the current-based synapses, the size of the excitatory
post-synaptic potential (EPSP) is constant, whereas for the conductance-based
synapses it depends on the value of the membrane potential.
Usage: python synaptic_input.py [-h] [--plot-figure] [--debug] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  Plot the simulation results to a file.
  --debug        Print debugging information
"""
from quantities import ms
from pyNN.utility import get_simulator, init_logging, normalized_filename
# === Configure the simulator ================================================
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file.",
                              {"action": "store_true"}),
                             ("--debug", "Print debugging information"))

if options.debug:
    init_logging(None, debug=True)
sim.setup(timestep=0.01, min_delay=1.0)
# === Build and instrument the network =======================================
# for each cell type we create two neurons, one of which we depolarize with
# injected current
cuba_exp = sim.Population(2, sim.IF_curr_exp(tau_m=10.0, i_offset=[0.0, 1.0]),
initial_values={"v": [-65, -55]}, label="Exponential, current-based")
cuba_alpha = sim.Population(2, sim.IF_curr_alpha(tau_m=10.0, i_offset=[0.0, 1.0]),
initial_values={"v": [-65, -55]}, label="Alpha, current-based")
coba_exp = sim.Population(2, sim.IF_cond_exp(tau_m=10.0, i_offset=[0.0, 1.0]),
initial_values={"v": [-65, -55]}, label="Exponential, conductance-based")
coba_alpha = sim.Population(2, sim.IF_cond_alpha(tau_m=10.0, i_offset=[0.0, 1.0]),
initial_values={"v": [-65, -55]}, label="Alpha, conductance-based")
v_step = sim.Population(2, sim.Izhikevich(i_offset=[0.0, 0.002]),
initial_values={"v": [-70, -67], "u": [-14, -13.4]}, label="Izhikevich")
# we next create a spike source, which will emit spikes at the specified times
spike_times = [25, 50, 80, 90]
stimulus = sim.Population(1, sim.SpikeSourceArray(spike_times=spike_times), label="Input spikes")
# now we connect the spike source to each of the neuron populations, with differing synaptic weights
all_neurons = cuba_exp + cuba_alpha + coba_exp + coba_alpha + v_step
connections = [sim.Projection(stimulus, population,
connector=sim.AllToAllConnector(),
synapse_type=sim.StaticSynapse(weight=w, delay=2.0),
receptor_type="excitatory")
for population, w in zip(all_neurons.populations, [1.6, 4.0, 0.03, 0.12, 1.0])]
# finally, we set up recording of the membrane potential
all_neurons.record('v')
# === Run the simulation =====================================================
sim.run(100.0)
# === Calculate the height of the first EPSP =================================
print("Height of first EPSP:")
for population in all_neurons.populations:
    # retrieve the recorded data
    vm = population.get_data().segments[0].filter(name='v')[0]
    # take the data between the first and second incoming spikes
    vm12 = vm.time_slice(spike_times[0] * ms, spike_times[1] * ms)
    # calculate and print the EPSP height
    for channel in (0, 1):
        v_init = vm12[:, channel][0]
        height = vm12[:, channel].max() - v_init
        print("  {:<30} at {}: {}".format(population.label, v_init, height))
# === Save the results, optionally plot a figure =============================
filename = normalized_filename("Results", "synaptic_input", "pkl", options.simulator)
all_neurons.write_data(filename, annotations={'script_name': __file__})
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel
    figure_filename = filename.replace("pkl", "png")
    Figure(
        Panel(cuba_exp.get_data().segments[0].filter(name='v')[0],
              ylabel="Membrane potential (mV)",
              data_labels=[cuba_exp.label], yticks=True, ylim=(-66, -50)),
        Panel(cuba_alpha.get_data().segments[0].filter(name='v')[0],
              data_labels=[cuba_alpha.label], yticks=True, ylim=(-66, -50)),
        Panel(coba_exp.get_data().segments[0].filter(name='v')[0],
              data_labels=[coba_exp.label], yticks=True, ylim=(-66, -50)),
        Panel(coba_alpha.get_data().segments[0].filter(name='v')[0],
              data_labels=[coba_alpha.label], yticks=True, ylim=(-66, -50)),
        Panel(v_step.get_data().segments[0].filter(name='v')[0],
              xticks=True, xlabel="Time (ms)",
              data_labels=[v_step.label], yticks=True, ylim=(-71, -65)),
        title="Responses of standard neuron models to synaptic input",
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
    print(figure_filename)
# === Clean up and quit ========================================================
sim.end()
14.9 Example of depressing and facilitating synapses
# encoding: utf-8
"""
Example of depressing and facilitating synapses
Usage: tsodyksmarkram.py [-h] [--plot-figure] [--debug DEBUG] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  Plot the simulation results to a file.
  --debug DEBUG  Print debugging information
"""
import numpy
from pyNN.utility import get_simulator, init_logging, normalized_filename
# === Configure the simulator ================================================
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file.",
                              {"action": "store_true"}),
                             ("--debug", "Print debugging information"))

if options.debug:
    init_logging(None, debug=True)

sim.setup(quit_on_end=False)
# === Build and instrument the network =======================================
spike_source = sim.Population(1, sim.SpikeSourceArray(spike_times=numpy.arange(10, 100, 10)))
connector = sim.AllToAllConnector()
synapse_types = {
    'static': sim.StaticSynapse(weight=0.01, delay=0.5),
    'depressing': sim.TsodyksMarkramSynapse(U=0.5, tau_rec=800.0, tau_facil=0.0,
                                            weight=0.01, delay=0.5),
    'facilitating': sim.TsodyksMarkramSynapse(U=0.04, tau_rec=100.0,
                                              tau_facil=1000.0, weight=0.01,
                                              delay=0.5),
}
populations = {}
projections = {}
for label in 'static', 'depressing', 'facilitating':
    populations[label] = sim.Population(3, sim.IF_cond_exp(e_rev_I=-75, tau_syn_I=[1.2, 6.7, 4.3]),
                                        label=label)
    populations[label].record(['v', 'gsyn_inh'])
    projections[label] = sim.Projection(spike_source, populations[label], connector,
                                        receptor_type='inhibitory',
                                        synapse_type=synapse_types[label])
spike_source.record('spikes')
# === Run the simulation =====================================================
sim.run(200.0)
# === Save the results, optionally plot a figure =============================
for label, p in populations.items():
    filename = normalized_filename("Results", "tsodyksmarkram_%s" % label,
                                   "pkl", options.simulator)
    p.write_data(filename, annotations={'script_name': __file__})
if options.plot_figure:
    from pyNN.utility.plotting import Figure, Panel
    figure_filename = normalized_filename("Results", "tsodyksmarkram",
                                          "png", options.simulator)
    panels = []
    for variable in ('gsyn_inh', 'v'):
        for population in populations.values():
            panels.append(
                Panel(population.get_data().segments[0].filter(name=variable)[0],
                      data_labels=[population.label], yticks=True),
            )
    # add ylabel to top panel in each group
    panels[0].options.update(ylabel=u'Synaptic conductance (µS)')
    panels[3].options.update(ylabel='Membrane potential (mV)')
    # add xticks and xlabel to final panel
    panels[-1].options.update(xticks=True, xlabel="Time (ms)")
    Figure(*panels,
           title="Example of static, facilitating and depressing synapses",
           annotations="Simulated with %s" % options.simulator.upper()
    ).save(figure_filename)
    print(figure_filename)
# === Clean up and quit ========================================================
sim.end()
14.10 A demonstration of the use of callbacks to vary the rate of a SpikeSourcePoisson
"""
A demonstration of the use of callbacks to vary the rate of a SpikeSourcePoisson.
Every 200 ms, the Poisson firing rate is increased by 20 spikes/s
Usage: varying_poisson.py [-h] [--plot-figure] simulator

positional arguments:
  simulator      neuron, nest, brian or another backend simulator

optional arguments:
  -h, --help     show this help message and exit
  --plot-figure  Plot the simulation results to a file.
"""
import numpy as np
from pyNN.utility import get_simulator, normalized_filename, ProgressBar
from pyNN.utility.plotting import Figure, Panel
sim, options = get_simulator(("--plot-figure", "Plot the simulation results to a file.",
{"action": "store_true"}))
rate_increment = 20
interval = 200
class SetRate(object):
    """
    A callback which changes the firing rate of a population of Poisson
    processes at a fixed interval.
    """

    def __init__(self, population, rate_generator, interval=20.0):
        assert isinstance(population.celltype, sim.SpikeSourcePoisson)
        self.population = population
        self.interval = interval
        self.rate_generator = rate_generator

    def __call__(self, t):
        try:
            self.population.set(rate=next(self.rate_generator))
        except StopIteration:
            pass
        return t + self.interval


class MyProgressBar(object):
    """
    A callback which draws a progress bar in the terminal.
    """

    def __init__(self, interval, t_stop):
        self.interval = interval
        self.t_stop = t_stop
        self.pb = ProgressBar(width=int(t_stop / interval), char=".")

    def __call__(self, t):
        self.pb(t / self.t_stop)
        return t + self.interval
sim.setup()
# === Create a population of poisson processes ===============================
p = sim.Population(50, sim.SpikeSourcePoisson())
p.record('spikes')
# === Run the simulation, with two callback functions ========================
rate_generator = iter(range(0, 100, rate_increment))
sim.run(1000, callbacks=[MyProgressBar(10.0, 1000.0),
SetRate(p, rate_generator, interval)])
# === Retrieve recorded data, and count the spikes in each interval ==========
data = p.get_data().segments[0]
all_spikes = np.hstack([st.magnitude for st in data.spiketrains])
spike_counts = [((all_spikes >= x) & (all_spikes < x + interval)).sum()
for x in range(0, 1000, interval)]
expected_spike_counts = [p.size * rate * interval / 1000.0
for rate in range(0, 100, rate_increment)]
print("\nActual spike counts: {}".format(spike_counts))
print("Expected mean spike counts: {}".format(expected_spike_counts))
if options.plot_figure:
    Figure(
        Panel(data.spiketrains, xlabel="Time (ms)", xticks=True),
        title="Time varying Poisson spike trains",
        annotations="Simulated with %s" % options.simulator.upper()
    ).save(normalized_filename("Results", "varying_poisson", "png", options.simulator))
sim.end()
14.11 Balanced network of excitatory and inhibitory neurons
# coding: utf-8
"""
Balanced network of excitatory and inhibitory neurons.
An implementation of benchmarks 1 and 2 from
Brette et al. (2007) Journal of Computational Neuroscience 23: 349-398
The network is based on the CUBA and COBA models of Vogels & Abbott
(J. Neurosci, 2005). The model consists of a network of excitatory and
inhibitory neurons, connected via current-based "exponential"
synapses (instantaneous rise, exponential decay).
Usage: python VAbenchmarks.py [-h] [--plot-figure] [--use-views] [--use-assembly]
                              [--use-csa] [--debug DEBUG]
                              simulator benchmark

positional arguments:
  simulator       neuron, nest, brian or another backend simulator
  benchmark       either CUBA or COBA

optional arguments:
  -h, --help      show this help message and exit
  --plot-figure   plot the simulation results to a file
  --use-views     use population views in creating the network
  --use-assembly  use assemblies in creating the network
  --use-csa       use the Connection Set Algebra to define the connectivity
  --debug DEBUG   print debugging information
Andrew Davison, UNIC, CNRS
August 2006
"""
import socket
from math import *
from pyNN.utility import get_simulator, Timer, ProgressBar, init_logging, normalized_filename
from pyNN.random import NumpyRNG, RandomDistribution
# === Configure the simulator ================================================
sim, options = get_simulator(
    ("benchmark", "either CUBA or COBA"),
    ("--plot-figure", "plot the simulation results to a file", {"action": "store_true"}),
    ("--use-views", "use population views in creating the network", {"action": "store_true"}),
    ("--use-assembly", "use assemblies in creating the network", {"action": "store_true"}),
    ("--use-csa", "use the Connection Set Algebra to define the connectivity", {"action": "store_true"}),
    ("--debug", "print debugging information"))
if options.use_csa:
    import csa
if options.debug:
    init_logging(None, debug=True)
timer = Timer()
# === Define parameters ========================================================
threads = 1
rngseed = 98765
parallel_safe = True
n        = 4000  # number of cells
r_ei     = 4.0   # number of excitatory cells:number of inhibitory cells
pconn    = 0.02  # connection probability
stim_dur = 50.   # (ms) duration of random stimulation
rate     = 100.  # (Hz) frequency of the random stimulation
dt       = 0.1   # (ms) simulation timestep
tstop    = 1000  # (ms) simulation duration
delay    = 0.2

# Cell parameters
area     = 20000.  # (µm²)
tau_m    = 20.     # (ms)
cm       = 1.      # (µF/cm²)
g_leak   = 5e-5    # (S/cm²)
if options.benchmark == "COBA":
    E_leak = -60.  # (mV)
elif options.benchmark == "CUBA":
    E_leak = -49.  # (mV)
v_thresh = -50.    # (mV)
v_reset  = -60.    # (mV)
t_refrac = 5.      # (ms) (clamped at v_reset)
v_mean   = -60.    # (mV) 'mean' membrane potential, for calculating CUBA weights
tau_exc  = 5.      # (ms)
tau_inh  = 10.     # (ms)
# Synapse parameters
if options.benchmark == "COBA":
Gexc = 4.
# (nS)
Ginh = 51.
# (nS)
elif options.benchmark == "CUBA":
Gexc = 0.27
# (nS) #Those weights should be similar to the COBA weights
Ginh = 4.5
# (nS) # but the delpolarising drift should be taken into account
Erev_exc = 0.
# (mV)
Erev_inh = -80.
# (mV)
### what is the synaptic delay???
# === Calculate derived parameters =============================================
area  = area*1e-8           # convert to cm²
cm    = cm*area*1000        # convert to nF
Rm    = 1e-6/(g_leak*area)  # membrane resistance in MΩ
assert tau_m == cm*Rm       # just to check
n_exc = int(round((n*r_ei/(1+r_ei))))  # number of excitatory cells
n_inh = n - n_exc                      # number of inhibitory cells
if options.benchmark == "COBA":
celltype = sim.IF_cond_exp
w_exc
= Gexc*1e-3
# We convert conductances to uS
w_inh
= Ginh*1e-3
elif options.benchmark == "CUBA":
celltype = sim.IF_curr_exp
w_exc = 1e-3*Gexc*(Erev_exc - v_mean) # (nA) weight of excitatory synapses
w_inh = 1e-3*Ginh*(Erev_inh - v_mean) # (nA)
assert w_exc > 0; assert w_inh < 0
# === Build the network ========================================================
extra = {'threads' : threads,
'filename': "va_%s.xml" % options.benchmark,
'label': 'VA'}
if options.simulator == "neuroml":
extra["file"] = "VAbenchmarks.xml"
node_id = sim.setup(timestep=dt, min_delay=delay, max_delay=1.0, **extra)
np = sim.num_processes()
host_name = socket.gethostname()
print("Host #%d is on %s" % (node_id + 1, host_name))
print("%s Initialising the simulator with %d thread(s)..." % (node_id, extra['threads']))
cell_params = {'tau_m': tau_m, 'tau_syn_E': tau_exc, 'tau_syn_I': tau_inh,
               'v_rest': E_leak, 'v_reset': v_reset, 'v_thresh': v_thresh,
               'cm': cm, 'tau_refrac': t_refrac}
if (options.benchmark == "COBA"):
cell_params['e_rev_E'] = Erev_exc
cell_params['e_rev_I'] = Erev_inh
timer.start()
print("%s Creating cell populations..." % node_id)
if options.use_views:
    # create a single population of neurons, and then use population views to define
    # excitatory and inhibitory sub-populations
    all_cells = sim.Population(n_exc + n_inh, celltype(**cell_params), label="All Cells")
    exc_cells = all_cells[:n_exc]
    exc_cells.label = "Excitatory cells"
    inh_cells = all_cells[n_exc:]
    inh_cells.label = "Inhibitory cells"
else:
    # create separate populations for excitatory and inhibitory neurons
    exc_cells = sim.Population(n_exc, celltype(**cell_params), label="Excitatory_Cells")
    inh_cells = sim.Population(n_inh, celltype(**cell_params), label="Inhibitory_Cells")
    if options.use_assembly:
        # group the populations into an assembly
        all_cells = exc_cells + inh_cells
if options.benchmark == "COBA":
ext_stim = sim.Population(20, sim.SpikeSourcePoisson(rate=rate, duration=stim_dur), label="expois
rconn = 0.01
ext_conn = sim.FixedProbabilityConnector(rconn)
ext_syn = sim.StaticSynapse(weight=0.1)
print("%s Initialising membrane potential to random values..." % node_id)
rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe)
uniformDistr = RandomDistribution('uniform', low=v_reset, high=v_thresh, rng=rng)
if options.use_views:
    all_cells.initialize(v=uniformDistr)
else:
    exc_cells.initialize(v=uniformDistr)
    inh_cells.initialize(v=uniformDistr)
print("%s Connecting populations..." % node_id)
progress_bar = ProgressBar(width=20)
if options.use_csa:
    connector = sim.CSAConnector(csa.cset(csa.random(pconn)))
else:
    connector = sim.FixedProbabilityConnector(pconn, rng=rng, callback=progress_bar)
exc_syn = sim.StaticSynapse(weight=w_exc, delay=delay)
inh_syn = sim.StaticSynapse(weight=w_inh, delay=delay)
connections = {}
if options.use_views or options.use_assembly:
    connections['exc'] = sim.Projection(exc_cells, all_cells, connector, exc_syn, receptor_type='excitatory')
    connections['inh'] = sim.Projection(inh_cells, all_cells, connector, inh_syn, receptor_type='inhibitory')
    if (options.benchmark == "COBA"):
        connections['ext'] = sim.Projection(ext_stim, all_cells, ext_conn, ext_syn, receptor_type='excitatory')
else:
    connections['e2e'] = sim.Projection(exc_cells, exc_cells, connector, exc_syn, receptor_type='excitatory')
    connections['e2i'] = sim.Projection(exc_cells, inh_cells, connector, exc_syn, receptor_type='excitatory')
    connections['i2e'] = sim.Projection(inh_cells, exc_cells, connector, inh_syn, receptor_type='inhibitory')
    connections['i2i'] = sim.Projection(inh_cells, inh_cells, connector, inh_syn, receptor_type='inhibitory')
    if (options.benchmark == "COBA"):
        connections['ext2e'] = sim.Projection(ext_stim, exc_cells, ext_conn, ext_syn, receptor_type='excitatory')
        connections['ext2i'] = sim.Projection(ext_stim, inh_cells, ext_conn, ext_syn, receptor_type='excitatory')
# === Setup recording ==========================================================
print("%s Setting up recording..." % node_id)
if options.use_views or options.use_assembly:
    all_cells.record('spikes')
    exc_cells[[0, 1]].record('v')
else:
    exc_cells.record('spikes')
    inh_cells.record('spikes')
    exc_cells[0, 1].record('v')
buildCPUTime = timer.diff()
# === Save connections to file =================================================
#for prj in connections.keys():
#    connections[prj].saveConnections('Results/VAbenchmark_%s_%s_%s_np%d.conn' % (benchmark, prj, options.simulator, np))
saveCPUTime = timer.diff()
# === Run simulation ===========================================================
print("%d Running simulation..." % node_id)
sim.run(tstop)
simCPUTime = timer.diff()
E_count = exc_cells.mean_spike_count()
I_count = inh_cells.mean_spike_count()
# === Print results to file ====================================================
print("%d Writing data to file..." % node_id)
filename = normalized_filename("Results", "VAbenchmarks_%s_exc" % options.benchmark, "pkl",
options.simulator, np)
exc_cells.write_data(filename,
annotations={'script_name': __file__})
inh_cells.write_data(filename.replace("exc", "inh"),
annotations={'script_name': __file__})
writeCPUTime = timer.diff()
if options.use_views or options.use_assembly:
    connections = "%d e→e,i  %d i→e,i" % (connections['exc'].size(),
                                          connections['inh'].size())
else:
    connections = u"%d e→e  %d e→i  %d i→e  %d i→i" % (connections['e2e'].size(),
                                                       connections['e2i'].size(),
                                                       connections['i2e'].size(),
                                                       connections['i2i'].size())
if node_id == 0:
    print("\n--- Vogels-Abbott Network Simulation ---")
    print("Nodes                  : %d" % np)
    print("Simulation type        : %s" % options.benchmark)
    print("Number of Neurons      : %d" % n)
    print("Number of Synapses     : %s" % connections)
    print("Excitatory conductance : %g nS" % Gexc)
    print("Inhibitory conductance : %g nS" % Ginh)
    print("Excitatory rate        : %g Hz" % (E_count * 1000.0 / tstop,))
    print("Inhibitory rate        : %g Hz" % (I_count * 1000.0 / tstop,))
    print("Build time             : %g s" % buildCPUTime)
    #print("Save connections time  : %g s" % saveCPUTime)
    print("Simulation time        : %g s" % simCPUTime)
    print("Writing time           : %g s" % writeCPUTime)
# === Finished with simulator ==================================================
sim.end()
CHAPTER 15
Contributors and licence
The following people have contributed to PyNN. Their affiliations at the time of the contributions are shown below.
• Andrew Davison [1]
• Pierre Yger [1, 9]
• Eilif Muller [7, 13]
• Jens Kremkow [5, 6]
• Daniel Brüderle [2]
• Laurent Perrinet [6]
• Jochen Eppler [3, 4]
• Dejan Pecevski [8]
• Nicolas Debeissat [10]
• Mikael Djurfeldt [12, 15]
• Michael Schmuker [11]
• Bernhard Kaplan [2]
• Thomas Natschlaeger [8]
• Subhasis Ray [14]
• Yury Zaytsev [16]
• Jan Antolik [1]
• Alexandre Gravier
• Thomas Close [17]
• Oliver Breitwieser [2]
• Jannis Schücker [16]
• Maximilian Schmidt [16]
1. Unité de Neuroscience, Information et Complexité, CNRS, Gif sur Yvette, France
2. Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
3. Honda Research Institute Europe GmbH, Offenbach, Germany
4. Bernstein Center for Computational Neuroscience, Albert-Ludwigs-University, Freiburg, Germany
5. Neurobiology and Biophysics, Institute of Biology III, Albert-Ludwigs-University, Freiburg, Germany
6. Institut de Neurosciences Cognitives de la Méditerranée, CNRS, Marseille, France
7. Laboratory of Computational Neuroscience, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
8. Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
9. Department of Bioengineering, Imperial College London
10. INRIA, Sophia Antipolis, France
11. Neuroinformatics & Theoretical Neuroscience, Freie Universität Berlin, Berlin, Germany
12. International Neuroinformatics Coordinating Facility, Stockholm, Sweden
13. Blue Brain Project, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
14. NCBS, Bangalore, India
15. PDC, KTH, Stockholm, Sweden
16. Institute of Neuroscience and Medicine (INM-6), Jülich Research Center, Jülich, Germany
17. Okinawa Institute of Science and Technology (OIST), Onna-son, Okinawa, Japan
15.1 Licence
PyNN is freely available under the CeCILL v2 license, which is equivalent to, and compatible with, the
GNU GPL license, but conforms to French law (and is also perfectly suited to international projects) - see
http://www.cecill.info/index.en.html for more information. The choice of GPL-equivalence was made to match the
licenses of other widely-used simulation software in computational neuroscience, such as NEURON (GPL) and Brian
(CeCILL).
If you are interested in using PyNN, but the choice of licence is a problem for you, please contact us to discuss
dual-licensing.
LICENSE AGREEMENT
CeCILL FREE SOFTWARE LICENSE AGREEMENT
Notice
This Agreement is a Free Software license agreement that is the result of discussions between its authors in order to
ensure compliance with the two main principles guiding its drafting:
• firstly, compliance with the principles governing the distribution of Free Software: access to source code, broad
rights granted to users,
• secondly, the election of a governing law, French law, with which it is conformant, both as regards the law of
torts and intellectual property law, and the protection that it offers to both authors and holders of the economic
rights over software.
The authors of the CeCILL (for Ce[a] C[nrs] I[nria] L[ogiciel] L[ibre]) license are:
Commissariat à l’Energie Atomique - CEA, a public scientific, technical and industrial research establishment, having
its principal place of business at 25 rue Leblanc, immeuble Le Ponant D, 75015 Paris, France.
Centre National de la Recherche Scientifique - CNRS, a public scientific and technological establishment, having its
principal place of business at 3 rue Michel-Ange, 75794 Paris cedex 16, France.
Institut National de Recherche en Informatique et en Automatique - INRIA, a public scientific and technological
establishment, having its principal place of business at Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le
Chesnay cedex, France.
Preamble
The purpose of this Free Software license agreement is to grant users the right to modify and redistribute the software
governed by this license within the framework of an open source distribution model.
The exercising of these rights is conditional upon certain obligations for users so as to preserve this status for all
subsequent redistributions.
In consideration of access to the source code and the rights to copy, modify and redistribute granted by the license,
users are provided only with a limited warranty and the software’s author, the holder of the economic rights, and the
successive licensors only have limited liability.
In this respect, the risks associated with loading, using, modifying and/or developing or reproducing the software by
the user are brought to the user’s attention, given its Free Software status, which may make it complicated to use, with
the result that its use is reserved for developers and experienced professionals having in-depth computer knowledge.
Users are therefore encouraged to load and test the suitability of the software as regards their requirements in conditions
enabling the security of their systems and/or data to be ensured and, more generally, to use and operate it in the same
conditions of security. This Agreement may be freely reproduced and published, provided it is not altered, and that no
provisions are either added or removed herefrom.
This Agreement may apply to any or all software for which the holder of the economic rights decides to submit the
use thereof to its provisions.
Article 1 - DEFINITIONS
For the purpose of this Agreement, when the following expressions commence with a capital letter, they shall have the
following meaning:
Agreement: means this license agreement, and its possible subsequent versions and annexes.
Software: means the software in its Object Code and/or Source Code form and, where applicable, its documentation,
“as is” when the Licensee accepts the Agreement.
Initial Software: means the Software in its Source Code and possibly its Object Code form and, where applicable, its
documentation, “as is” when it is first distributed under the terms and conditions of the Agreement.
Modified Software: means the Software modified by at least one Contribution.
Source Code: means all the Software’s instructions and program lines to which access is required so as to modify the
Software.
Object Code: means the binary files originating from the compilation of the Source Code.
Holder: means the holder(s) of the economic rights over the Initial Software.
Licensee: means the Software user(s) having accepted the Agreement.
Contributor: means a Licensee having made at least one Contribution.
Licensor: means the Holder, or any other individual or legal entity, who distributes the Software under the Agreement.
Contribution: means any or all modifications, corrections, translations, adaptations and/or new functions integrated
into the Software by any or all Contributors, as well as any or all Internal Modules.
Module: means a set of sources files including their documentation that enables supplementary functions or services
in addition to those offered by the Software.
External Module: means any or all Modules, not derived from the Software, so that this Module and the Software run
in separate address spaces, with one calling the other when they are run.
Internal Module: means any or all Module, connected to the Software so that they both execute in the same address
space.
GNU GPL: means the GNU General Public License version 2 or any subsequent version, as published by the Free
Software Foundation Inc.
Parties: mean both the Licensee and the Licensor.
These expressions may be used both in singular and plural form.
Article 2 - PURPOSE
The purpose of the Agreement is the grant by the Licensor to the Licensee of a non-exclusive, transferable and worldwide license for the Software as set forth in Article 5 hereinafter for the whole term of the protection granted by the
rights over said Software.
Article 3 - ACCEPTANCE
3.1 The Licensee shall be deemed as having accepted the terms and conditions of this Agreement upon the occurrence
of the first of the following events:
• (i) loading the Software by any or all means, notably, by downloading from a remote server, or by loading from
a physical medium;
• (ii) the first time the Licensee exercises any of the rights granted hereunder.
3.2 One copy of the Agreement, containing a notice relating to the characteristics of the Software, to the limited
warranty, and to the fact that its use is restricted to experienced users has been provided to the Licensee prior to
its acceptance as set forth in Article 3.1 hereinabove, and the Licensee hereby acknowledges that it has read and
understood it.
Article 4 - EFFECTIVE DATE AND TERM
4.1 EFFECTIVE DATE
The Agreement shall become effective on the date when it is accepted by the Licensee as set forth in Article 3.1.
4.2 TERM
The Agreement shall remain in force for the entire legal term of protection of the economic rights over the Software.
Article 5 - SCOPE OF RIGHTS GRANTED
The Licensor hereby grants to the Licensee, who accepts, the following rights over the Software for any or all use, and
for the term of the Agreement, on the basis of the terms and conditions set forth hereinafter.
Besides, if the Licensor owns or comes to own one or more patents protecting all or part of the functions of the Software
or of its components, the Licensor undertakes not to enforce the rights granted by these patents against successive
Licensees using, exploiting or modifying the Software. If these patents are transferred, the Licensor undertakes to
have the transferees subscribe to the obligations set forth in this paragraph.
5.1 RIGHT OF USE
The Licensee is authorized to use the Software, without any limitation as to its fields of application, with it being
hereinafter specified that this comprises:
1. permanent or temporary reproduction of all or part of the Software by any or all means and in any or all form.
2. loading, displaying, running, or storing the Software on any or all medium.
3. entitlement to observe, study or test its operation so as to determine the ideas and principles behind any or
all constituent elements of said Software. This shall apply when the Licensee carries out any or all loading,
displaying, running, transmission or storage operation as regards the Software, that it is entitled to carry out
hereunder.
5.2 ENTITLEMENT TO MAKE CONTRIBUTIONS
The right to make Contributions includes the right to translate, adapt, arrange, or make any or all modifications to the
Software, and the right to reproduce the resulting software.
The Licensee is authorized to make any or all Contributions to the Software provided that it includes an explicit notice
that it is the author of said Contribution and indicates the date of the creation thereof.
5.3 RIGHT OF DISTRIBUTION
In particular, the right of distribution includes the right to publish, transmit and communicate the Software to the
general public on any or all medium, and by any or all means, and the right to market, either in consideration of a fee,
or free of charge, one or more copies of the Software by any means.
The Licensee is further authorized to distribute copies of the modified or unmodified Software to third parties according
to the terms and conditions set forth hereinafter.
5.3.1 DISTRIBUTION OF SOFTWARE WITHOUT MODIFICATION
The Licensee is authorized to distribute true copies of the Software in Source Code or Object Code form, provided
that said distribution complies with all the provisions of the Agreement and is accompanied by:
1. a copy of the Agreement,
2. a notice relating to the limitation of both the Licensor’s warranty and liability as set forth in Articles 8 and 9,
and that, in the event that only the Object Code of the Software is redistributed, the Licensee allows future Licensees
unhindered access to the full Source Code of the Software by indicating how to access it, it being understood that the
additional cost of acquiring the Source Code shall not exceed the cost of transferring the data.
5.3.2 DISTRIBUTION OF MODIFIED SOFTWARE
When the Licensee makes a Contribution to the Software, the terms and conditions for the distribution of the resulting
Modified Software become subject to all the provisions of this Agreement.
The Licensee is authorized to distribute the Modified Software, in source code or object code form, provided that said
distribution complies with all the provisions of the Agreement and is accompanied by:
1. a copy of the Agreement,
2. a notice relating to the limitation of both the Licensor’s warranty and liability as set forth in Articles 8 and 9,
and that, in the event that only the object code of the Modified Software is redistributed, the Licensee allows future
Licensees unhindered access to the full source code of the Modified Software by indicating how to access it, it being
understood that the additional cost of acquiring the source code shall not exceed the cost of transferring the data.
5.3.3 DISTRIBUTION OF EXTERNAL MODULES
When the Licensee has developed an External Module, the terms and conditions of this Agreement do not apply to
said External Module, that may be distributed under a separate license agreement.
5.3.4 COMPATIBILITY WITH THE GNU GPL
The Licensee can include a code that is subject to the provisions of one of the versions of the GNU GPL in the Modified
or unmodified Software, and distribute that entire code under the terms of the same version of the GNU GPL.
The Licensee can include the Modified or unmodified Software in a code that is subject to the provisions of one of the
versions of the GNU GPL, and distribute that entire code under the terms of the same version of the GNU GPL.
Article 6 - INTELLECTUAL PROPERTY
6.1 OVER THE INITIAL SOFTWARE
The Holder owns the economic rights over the Initial Software. Any or all use of the Initial Software is subject to
compliance with the terms and conditions under which the Holder has elected to distribute its work and no one shall
be entitled to modify the terms and conditions for the distribution of said Initial Software.
The Holder undertakes that the Initial Software will remain ruled at least by this Agreement, for the duration set forth
in Article 4.2.
6.2 OVER THE CONTRIBUTIONS
The Licensee who develops a Contribution is the owner of the intellectual property rights over this Contribution as
defined by applicable law.
6.3 OVER THE EXTERNAL MODULES
The Licensee who develops an External Module is the owner of the intellectual property rights over this External
Module as defined by applicable law and is free to choose the type of agreement that shall govern its distribution.
6.4 JOINT PROVISIONS
The Licensee expressly undertakes:
1. not to remove, or modify, in any manner, the intellectual property notices attached to the Software;
2. to reproduce said notices, in an identical manner, in the copies of the Software modified or not.
The Licensee undertakes not to directly or indirectly infringe the intellectual property rights of the Holder and/or
Contributors on the Software and to take, where applicable, vis-à-vis its staff, any and all measures required to ensure
respect of said intellectual property rights of the Holder and/or Contributors.
Article 7 - RELATED SERVICES
7.1 Under no circumstances shall the Agreement oblige the Licensor to provide technical assistance or maintenance
services for the Software.
However, the Licensor is entitled to offer this type of services. The terms and conditions of such technical assistance,
and/or such maintenance, shall be set forth in a separate instrument. Only the Licensor offering said maintenance
and/or technical assistance services shall incur liability therefor.
7.2 Similarly, any Licensor is entitled to offer to its licensees, under its sole responsibility, a warranty, that shall only
be binding upon itself, for the redistribution of the Software and/or the Modified Software, under terms and conditions
that it is free to decide. Said warranty, and the financial terms and conditions of its application, shall be subject of a
separate instrument executed between the Licensor and the Licensee.
Article 8 - LIABILITY
8.1 Subject to the provisions of Article 8.2, the Licensee shall be entitled to claim compensation for any direct loss
it may have suffered from the Software as a result of a fault on the part of the relevant Licensor, subject to providing
evidence thereof.
8.2 The Licensor’s liability is limited to the commitments made under this Agreement and shall not be incurred
as a result of in particular: (i) loss due the Licensee’s total or partial failure to fulfill its obligations, (ii) direct or
consequential loss that is suffered by the Licensee due to the use or performance of the Software, and (iii) more
generally, any consequential loss. In particular the Parties expressly agree that any or all pecuniary or business loss
(i.e. loss of data, loss of profits, operating loss, loss of customers or orders, opportunity cost, any disturbance to
business activities) or any or all legal proceedings instituted against the Licensee by a third party, shall constitute
consequential loss and shall not provide entitlement to any or all compensation from the Licensor.
Article 9 - WARRANTY
9.1 The Licensee acknowledges that the scientific and technical state-of-the-art when the Software was distributed
did not enable all possible uses to be tested and verified, nor for the presence of possible defects to be detected. In
this respect, the Licensee’s attention has been drawn to the risks associated with loading, using, modifying and/or
developing and reproducing the Software which are reserved for experienced users.
The Licensee shall be responsible for verifying, by any or all means, the suitability of the product for its requirements,
its good working order, and for ensuring that it shall not cause damage to either persons or properties.
9.2 The Licensor hereby represents, in good faith, that it is entitled to grant all the rights over the Software (including
in particular the rights set forth in Article 5).
9.3 The Licensee acknowledges that the Software is supplied “as is” by the Licensor without any other express or tacit
warranty, other than that provided for in Article 9.2 and, in particular, without any warranty as to its commercial value,
its secured, safe, innovative or relevant nature.
Specifically, the Licensor does not warrant that the Software is free from any error, that it will operate without interruption, that it will be compatible with the Licensee’s own equipment and software configuration, nor that it will meet
the Licensee’s requirements.
9.4 The Licensor does not either expressly or tacitly warrant that the Software does not infringe any third party
intellectual property right relating to a patent, software or any other property right. Therefore, the Licensor disclaims
any and all liability towards the Licensee arising out of any or all proceedings for infringement that may be instituted in
respect of the use, modification and redistribution of the Software. Nevertheless, should such proceedings be instituted
against the Licensee, the Licensor shall provide it with technical and legal assistance for its defense. Such technical
and legal assistance shall be decided on a case-by-case basis between the relevant Licensor and the Licensee pursuant
to a memorandum of understanding. The Licensor disclaims any and all liability as regards the Licensee’s use of the
name of the Software. No warranty is given as regards the existence of prior rights over the name of the Software or
as regards the existence of a trademark.
Article 10 - TERMINATION
10.1 In the event of a breach by the Licensee of its obligations hereunder, the Licensor may automatically terminate
this Agreement thirty (30) days after notice has been sent to the Licensee and has remained ineffective.
10.2 A Licensee whose Agreement is terminated shall no longer be authorized to use, modify or distribute the Software.
However, any licenses that it may have granted prior to termination of the Agreement shall remain valid subject to
their having been granted in compliance with the terms and conditions hereof.
Article 11 - MISCELLANEOUS
11.1 EXCUSABLE EVENTS
Neither Party shall be liable for any or all delay, or failure to perform the Agreement, that may be attributable to
an event of force majeure, an act of God or an outside cause, such as defective functioning or interruptions of the
electricity or telecommunications networks, network paralysis following a virus attack, intervention by government
authorities, natural disasters, water damage, earthquakes, fire, explosions, strikes and labor unrest, war, etc.
11.2 Any failure by either Party, on one or more occasions, to invoke one or more of the provisions hereof, shall
under no circumstances be interpreted as being a waiver by the interested Party of its right to invoke said provision(s)
subsequently.
11.3 The Agreement cancels and replaces any or all previous agreements, whether written or oral, between the Parties
and having the same purpose, and constitutes the entirety of the agreement between said Parties concerning said
purpose. No supplement or modification to the terms and conditions hereof shall be effective as between the Parties
unless it is made in writing and signed by their duly authorized representatives.
11.4 In the event that one or more of the provisions hereof were to conflict with a current or future applicable act or
legislative text, said act or legislative text shall prevail, and the Parties shall make the necessary amendments so as to
comply with said act or legislative text. All other provisions shall remain effective. Similarly, invalidity of a provision
of the Agreement, for any reason whatsoever, shall not cause the Agreement as a whole to be invalid.
11.5 LANGUAGE
The Agreement is drafted in both French and English and both versions are deemed authentic.
Article 12 - NEW VERSIONS OF THE AGREEMENT
12.1 Any person is authorized to duplicate and distribute copies of this Agreement.
12.2 So as to ensure coherence, the wording of this Agreement is protected and may only be modified by the authors
of the License, who reserve the right to periodically publish updates or new versions of the Agreement, each with a
separate number. These subsequent versions may address new issues encountered by Free Software.
12.3 Any Software distributed under a given version of the Agreement may only be subsequently distributed under the
same version of the Agreement or a subsequent version, subject to the provisions of Article 5.3.4.
Article 13 - GOVERNING LAW AND JURISDICTION
13.1 The Agreement is governed by French law. The Parties agree to endeavor to seek an amicable solution to any
disagreements or disputes that may arise during the performance of the Agreement.
13.2 Failing an amicable solution within two (2) months as from their occurrence, and unless emergency proceedings
are necessary, the disagreements or disputes shall be referred to the Paris Courts having jurisdiction, by the more
diligent Party.
Version 2.0 dated 2006-09-05.
CHAPTER 16
Release notes
16.1 PyNN 0.8.2 release notes
6th December 2016
Welcome to PyNN 0.8.2!
16.1.1 New spike sources
Two new spike source models were added, with implementations for the NEST and NEURON backends:
SpikeSourceGamma (spikes follow a gamma process) and SpikeSourcePoissonRefractory (inter-spike
intervals are drawn from an exponential distribution as for a Poisson process, but there is a fixed refractory period after
each spike during which no spike can occur).
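As a minimal sketch of their use (the parameter names alpha, beta, rate and tau_refrac are our reading of the
standard model definitions, not confirmed here; check the API reference for the exact signatures):

import pyNN.nest as sim  # or pyNN.neuron
sim.setup()
# gamma process with shape parameter alpha and rate parameter beta
gamma_sources = sim.Population(10, sim.SpikeSourceGamma(alpha=3, beta=30.0))
# Poisson process with a 5 ms refractory period after each spike
refractory_sources = sim.Population(10, sim.SpikeSourcePoissonRefractory(rate=100.0, tau_refrac=5.0))
gamma_sources.record('spikes')
refractory_sources.record('spikes')
sim.run(1000.0)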
16.1.2 Other changes
• Changed the save_positions() format from id x y z to index x y z to make it simulator independent.
• Added histograms to the utility.plotting module.
• Added a multiple_synapses flag to Projection.get(..., format="array") to control how synaptic
parameters are combined when there are multiple connections between pairs of neurons. Until now, parameters
were summed, which made sense for weights but not for delays. We have adopted the Brian approach of adding
an argument multiple_synapses which is one of {'last', 'first', 'sum', 'min', 'max'}. The
default is 'sum' (see the sketch after this list).
• Assorted bug fixes
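As a sketch of the new argument (assuming an existing Projection prj; the parameter choices are illustrative):

delays = prj.get('delay', format='array', multiple_synapses='min')  # shortest delay per neuron pair
weights = prj.get('weight', format='array')                         # default 'sum': weights are added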
16.2 PyNN 0.8.1 release notes
25th May 2016
Welcome to PyNN 0.8.1!
16.2.1 NEST 2.10
This release introduces support for NEST 2.10. Previous versions of NEST are no longer supported.
16.2.2 Other changes
• Assorted bug fixes
16.3 PyNN 0.8.0 release notes
October 5th 2015
Welcome to the final release of PyNN 0.8.0!
For PyNN 0.8 we have taken the opportunity to make significant, backward-incompatible changes to the API. The aim
was fourfold:
• to simplify the API, making it more consistent and easier to remember;
• to make the API more powerful, so more complex models can be expressed with less code;
• to allow a number of internal simplifications so it is easier for new developers to contribute;
• to prepare for planned future extensions, notably support for multi-compartmental models.
We summarize here the main changes between versions 0.7 and 0.8 of the API.
16.3.1 Creating populations
In previous versions of PyNN, the Population constructor was called with the population size, a BaseCellType
sub-class such as IF_cond_exp and a dictionary of parameter values. For example:
p = Population(1000, IF_cond_exp, {'tau_m': 12.0, 'cm': 0.8})  # PyNN 0.7
This dictionary was passed to the cell-type class constructor within the Population constructor to create a cell-type
instance.
The reason for doing this was that in early versions of PyNN, use of native NEST models was supported by passing a
string, the model name, as the cell-type argument. Since PyNN 0.7, however, native models have been supported with
the NativeCellType class, and passing a string is no longer allowed.
It makes more sense, therefore, for the cell-type instance to be created by the user, and to pass a cell-type instance,
rather than a cell-type class, to the Population constructor.
There is also a second change: specification of parameters for cell-type classes is now done via keyword arguments
rather than a single parameter dictionary. This is for consistency with current sources and synaptic plasticity models,
which already use keyword arguments.
The example above should be rewritten as:
p = Population(1000, IF_cond_exp(tau_m=12.0, cm=0.8))  # PyNN 0.8
or:
p = Population(1000, IF_cond_exp(**{'tau_m': 12.0, 'cm': 0.8}))  # PyNN 0.8
or:
cell_type = IF_cond_exp(tau_m=12.0, cm=0.8)
p = Population(1000, cell_type)  # PyNN 0.8
The first form, with a separate parameter dictionary, is still supported for the time being, but is deprecated and may be
removed in future versions.
16.3.2 Specifying heterogeneous parameter values
In previous versions of PyNN, the Population constructor supported setting parameters to either homogeneous
values (all cells in the population have the same value) or random values. After construction, it was possible to change
parameters using the Population.set(), Population.tset() (for topographic set - parameters were set by
using an array of the same size as the population) and Population.rset() (for random set) methods.
In PyNN 0.8, setting parameters is simpler and more consistent, in that both when constructing a cell type for use in
the Population constructor (see above) and in the Population.set() method, parameter values can be any of
the following:
• a single number - sets the same value for all cells in the Population;
• a RandomDistribution object - for each cell, sets a different random value drawn from the distribution;
• a list or 1D NumPy array of the same size as the Population;
• a function that takes a single integer argument; this function will be called with the index of every cell in the
Population to return the parameter value for that cell.
See Model parameters and initial values for more details and examples.
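As an illustrative sketch (assuming a population p of 100 cells and the imports shown; the values are arbitrary):

import numpy
from pyNN.random import RandomDistribution
p.set(tau_m=20.0)                                              # single number: same value for all cells
p.set(tau_m=RandomDistribution('normal', mu=20.0, sigma=2.0))  # a different random value for each cell
p.set(tau_m=numpy.linspace(18.0, 22.0, 100))                   # array of the same size as the population
p.set(tau_m=lambda i: 20.0 + 0.01 * i)                         # function of the cell index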
The call signature of the Population.set() method has also been changed; now parameters should be specified
as keyword arguments. For example, instead of:
p.set({"tau_m": 20.0})
# PyNN 0.7
use:
p.set(tau_m=20.0)  # PyNN 0.8
and instead of:
p.set({"tau_m": 20.0, "v_rest": -65})
# PyNN 0.7
use:
p.set(tau_m=20.0, v_rest=-65)  # PyNN 0.8
Now that Population.set() accepts random distributions and arrays as arguments, the Population.tset()
and Population.rset() methods are superfluous. As of version 0.8, their use is deprecated and they will be
removed in the next version of PyNN. Their use can be replaced as follows:
p.tset("i_offset", arr)
p.set(i_offset=arr)
# PyNN 0.7
# PyNN 0.8
p.rset("tau_m", rand_distr)
p.set(tau_m=rand_distr)
# PyNN 0.7
# PyNN 0.8
Setting spike times
Where a single parameter value is already an array, e.g. spike times, this should be wrapped by a Sequence object.
For example, to generate a different Poisson spike train for every neuron in a population of SpikeSourceArrays:
def generate_spike_times(i_range):
    return [Sequence(numpy.add.accumulate(numpy.random.exponential(10.0, size=10)))
            for i in i_range]
p = sim.Population(30, sim.SpikeSourceArray(spike_times=generate_spike_times))
Standardization of random distributions
Since its earliest versions PyNN has supported swapping in and out different random number generators, but until now
there has been no standardization of these RNGs; for example the GSL random number library uses “gaussian” where
NumPy uses “normal”. This limited the usefulness of this feature, especially for the NativeRNG class, which signals
that random number generation should be passed down to the simulator backend rather than being performed at the
Python level.
This has now been fixed. The names of random number distributions and of their parameters have now been standardized, based for the most part on the nomenclature used by Wikipedia. A quick example:
from pyNN.random import NumpyRNG, GSLRNG, RandomDistribution
rd1 = RandomDistribution('normal', mu=0.5, sigma=0.1, rng=NumpyRNG(seed=922843))
rd2 = RandomDistribution('normal', mu=0.5, sigma=0.1, rng=GSLRNG(seed=426482))
16.3.3 Recording
Previous versions of PyNN had three methods for recording from populations of neurons: record(), record_v()
and record_gsyn(), for recording spikes, membrane potentials, and synaptic conductances, respectively. There
was no official way to record any other state variables, for example the w variable from the adaptive-exponential
integrate-and-fire model, or when using native, non-standard models, although there were workarounds.
In PyNN 0.8, we have replaced these three methods with a single record() method, which takes the variable to
record as its first argument, e.g.:
p.record() # PyNN 0.7
p.record_v()
p.record_gsyn()
becomes:
p.record('spikes') # PyNN 0.8
p.record('v')
p.record(['gsyn_exc', 'gsyn_inh'])
Note that (1) you can now choose to record the excitatory and inhibitory synaptic conductances separately, (2) you can
give a list of variables to record. For example, you can record all the variables for the EIF_cond_exp_isfa_ista
model in a single command using:
p.record(['spikes', 'v', 'w', 'gsyn_exc', 'gsyn_inh'])  # PyNN 0.8
Note that the record_v() and record_gsyn() methods still exist, but their use is deprecated, and they will be
removed in the next version of PyNN.
A further change is that Population.record() has an optional sampling_interval argument, allowing recording
at intervals larger than the integration time step.
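For example (a sketch; the interval is in milliseconds):

p.record('v', sampling_interval=1.0)  # sample the membrane potential every 1.0 ms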
See Recording spikes and state variables for more details.
16.3.4 Retrieving recorded data
Perhaps the biggest change in PyNN 0.8 is that handling of recorded data, whether retrieval as Python objects or saving
to file, now uses the Neo package, which provides a common Python object model for neurophysiology data (whether
real or simulated).
Using Neo provides several advantages:
• data objects contain essential metadata, such as units, sampling interval, etc.;
• data can be saved to any of the file formats supported by Neo, including HDF5 and Matlab files;
• it is easier to handle data when running multiple simulations with the same network (calling reset() between
each one);
• it is possible to save multiple signals to a single file;
• better interoperability with other Python packages using Neo (for data analysis, etc.).
Note that Neo is based on NumPy, and most Neo data objects sub-class the NumPy ndarray class, so much of your
data handling code should work exactly the same as before.
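As a rough sketch of the typical workflow (the variable names are ours, for illustration only):

block = p.get_data()              # a Neo Block
segment = block.segments[0]       # one Segment per run; reset() starts a new one
vm = segment.filter(name='v')[0]  # an analog signal with units and sampling period attached
print(vm.units, vm.sampling_period)
p.write_data("results.pkl")       # the file format is chosen from the extension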
See Data handling for more details.
16.3.5 Creating connections
In previous versions of PyNN, synaptic weights and delays were specified on creation of the Connector object. If the
synaptic weight had its own dynamics (whether short-term or spike-timing-dependent plasticity), the parameters for
this were specified on creation of a SynapseDynamics object. In other words, specification of synaptic parameters
was split across two different classes.
SynapseDynamics was also rather complex, and could have both a “fast” (for short-term synaptic depression
and facilitation) and “slow” (for long-term plasticity) component, although most simulator backends did not support
specifying both fast and slow components at the same time.
In PyNN 0.8, all synaptic parameters including weights and delays are given as arguments to a SynapseType subclass such as StaticSynapse or TsodyksMarkramSynapse. For example, instead of:
prj = Projection(p1, p2, AllToAllConnector(weights=0.05, delays=0.5))  # PyNN 0.7
you should now write:
prj = Projection(p1, p2, AllToAllConnector(), StaticSynapse(weight=0.05, delay=0.5))  # PyNN 0.8
and instead of:
params = {'U': 0.04, 'tau_rec': 100.0, 'tau_facil': 1000.0}
facilitating = SynapseDynamics(fast=TsodyksMarkramMechanism(**params))  # PyNN 0.7
prj = Projection(p1, p2, FixedProbabilityConnector(p_connect=0.1, weights=0.01),
                 synapse_dynamics=facilitating)
the following:
params = {'U': 0.04, 'tau_rec': 100.0, 'tau_facil': 1000.0, 'weight': 0.01}
facilitating = TsodyksMarkramSynapse(**params)  # PyNN 0.8
prj = Projection(p1, p2, FixedProbabilityConnector(p_connect=0.1),
                 synapse_type=facilitating)
Note that “weights” and “delays” are now “weight” and “delay”. In addition, the “method” argument to
Projection is now called “connector”, and the “target” argument is now “receptor_type”. The “rng” argument has been moved from Projection to Connector, and the “space” argument of Connector has been
moved to Projection.
The ability to specify both short-term and long-term plasticity for a given connection type, in a simulator-independent
way, has been removed, although in practice only the NEURON backend supported this. This functionality will be
reintroduced in PyNN 0.9. If you need this in the meantime, a workaround for the NEURON backend is to use a
NativeSynapseType mechanism - ask on the mailing list for guidance.
Finally, the parameterization of STDP models has been modified. The A_plus and A_minus parameters have been
moved from the weight-dependence components to the timing-dependence components, since effectively they describe
the shape of the STDP curve independently of how the weight change depends on the current weight.
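For example, a sketch of an STDP specification under the new parameterization (the values are arbitrary; p1 and
p2 are assumed to be existing Populations):

stdp = STDPMechanism(
    timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                    A_plus=0.01, A_minus=0.012),  # curve shape, including A_plus/A_minus
    weight_dependence=AdditiveWeightDependence(w_min=0.0, w_max=0.04),
    weight=0.02, delay=1.0)
prj = Projection(p1, p2, AllToAllConnector(), synapse_type=stdp)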
16.3.6 Specifying heterogeneous synapse parameters
As for neuron parameters, synapse parameter values can now be any of the following:
• a single number - sets the same value for all connections in the Projection;
• a RandomDistribution object - for each connection, sets a different random value drawn from the distribution;
• a list or 1D NumPy array of the same size as the Projection (although this is not very useful for random
networks, whose size may not be known in advance);
• a function that takes a single float argument; this function will be called with the distance between the pre- and
post-synaptic cell to return the parameter value for that cell.
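A sketch combining these forms (the values are arbitrary; d is the pre- to post-synaptic distance, and the gamma
distribution uses the standardized parameter names k and theta):

from pyNN.random import RandomDistribution
syn = StaticSynapse(weight=RandomDistribution('gamma', k=2.0, theta=0.002),
                    delay=lambda d: 0.2 + d / 100.0)
prj = Projection(p1, p2, FixedProbabilityConnector(p_connect=0.1), synapse_type=syn)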
16.3.7 Accessing, setting and saving properties of synaptic connections
In older versions of PyNN, the Projection class had a bunch of methods for working with
synaptic parameters: getWeights(), setWeights(), randomizeWeights(), printWeights(),
getDelays(), setDelays(), randomizeDelays(), printDelays(), getSynapseDynamics(),
setSynapseDynamics(), randomizeSynapseDynamics(), and saveConnections().
These have been replaced by three methods: get(), set() and save(). The original methods still exist, but their
use is deprecated and they will be removed in the next version of PyNN. You should update your code as follows:
prj.getWeights(format='list')                         # PyNN 0.7
prj.get('weight', format='list', with_address=False)  # PyNN 0.8

prj.randomizeDelays(rand_distr)  # PyNN 0.7
prj.set(delay=rand_distr)        # PyNN 0.8

prj.setSynapseDynamics('tau_rec', 50.0)  # PyNN 0.7
prj.set(tau_rec=50.0)                    # PyNN 0.8

prj.printWeights('exc_weights.txt', format='array')    # PyNN 0.7
prj.save('weight', 'exc_weights.txt', format='array')  # PyNN 0.8

prj.saveConnections('exc_conn.txt')             # PyNN 0.7
prj.save('all', 'exc_conn.txt', format='list')  # PyNN 0.8
Also note that all three new methods can operate on several parameters at a time:
weights, delays = prj.getWeights('array'), prj.getDelays('array')  # PyNN 0.7
weights, delays = prj.get(['weight', 'delay'], format='array')     # PyNN 0.8

prj.randomizeWeights(rand_distr); prj.setDelays(0.4)  # PyNN 0.7
prj.set(weight=rand_distr, delay=0.4)                 # PyNN 0.8
16.3.8 New and improved connectors
The library of Connector classes has been extended. The DistanceDependentProbabilityConnector
(DDPC) has been generalized, resulting in the IndexBasedProbabilityConnector, with which the connection
probability can be specified as any function of the indices i and j of the pre- and post-synaptic neurons within
their populations. In addition, the distance expression for the DDPC can now be a callable object (such as a function)
as well as a string expression.
The ArrayConnector allows connections to be specified as an explicit boolean matrix, with shape (m, n) where m
is the size of the presynaptic population and n that of the postsynaptic population.
The CloneConnector takes the connection matrix from an existing Projection and uses it to create a new
Projection, with the option of changing the weights, delays, receptor type, etc.
The FromListConnector and FromFileConnector now support specifying any synaptic parameter (e.g. parameters of the synaptic plasticity rule), not just weight and delay.
The FixedNumberPostConnector now has an option with_replacement, which controls how the post-synaptic
population is sampled, and affects the incidence of multiple connections between pairs of neurons (“multapses”).
We have added a version of CSAConnector for the NEST backend that passes down the CSA object to PyNEST’s
CGConnect() function. This greatly speeds up CSAConnector with NEST.
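A few sketches of the connectors described above (population sizes, expressions and parameter values are
illustrative; the exact constructor signatures should be checked against the API reference):

import numpy
# ArrayConnector: explicit boolean matrix of shape (pre_size, post_size)
mask = numpy.random.rand(p1.size, p2.size) < 0.1
prj_a = Projection(p1, p2, ArrayConnector(mask), StaticSynapse())
# IndexBasedProbabilityConnector: connection probability as a function of the indices i and j
prj_b = Projection(p1, p2,
                   IndexBasedProbabilityConnector(lambda i, j: numpy.exp(-abs(i - j) / 20.0)),
                   StaticSynapse())
# FixedNumberPostConnector: sample 10 post-synaptic cells per pre-synaptic cell, without replacement
prj_c = Projection(p1, p2, FixedNumberPostConnector(n=10, with_replacement=False),
                   StaticSynapse())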
16.3.9 Simulation control
Two new functions for advancing a simulation have been added: run_for() and run_until(). run_for() is
just an alias for run(). run_until() allows you to specify the absolute time at which a simulation should stop,
rather than the increment of time. In addition, it is now possible to specify a call-back function that should be called
at intervals during a run, e.g.:
>>>
...
...
>>>
The
The
The
The
def report_time(t):
print("The time is %g" % t)
return t + 100.0
run_until(300.0, callbacks=[report_time])
time is 0
time is 100
time is 200
time is 300
One potential use of this feature is to record synaptic weights during a simulation with synaptic plasticity.
The default value of the min_delay argument to setup() is now “auto”, which means that the simulator should
calculate the minimal synaptic delay itself. This change can lead to large speedups for NEST and NEURON code.
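A minimal sketch:

setup(timestep=0.1, min_delay="auto")  # the backend computes the minimal synaptic delay itself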
16.3.10 Simple plotting
We have added a small library to make it easy to produce simple plots of data recorded from a PyNN simulation.
This is not intended for publication-quality or highly-customized plots, but for basic visualization.
For example:
from pyNN.utility.plotting import Figure, Panel
...
population.record('spikes')
population[0:2].record(('v', 'gsyn_exc'))
...
data = population.get_data().segments[0]
vm = data.filter(name="v")[0]
gsyn = data.filter(name="gsyn_exc")[0]
Figure(
    Panel(vm, ylabel="Membrane potential (mV)"),
    Panel(gsyn, ylabel="Synaptic conductance (uS)"),
    Panel(data.spiketrains, xlabel="Time (ms)", xticks=True)
).save("simulation_results.png")
16.3.11 Supported backends
PyNN 0.8.0 is compatible with NEST versions 2.6 to 2.8, NEURON versions 7.3 to 7.4, and Brian 1.4. Support for
Brian 2 is planned for a future release.
Support for the PCSIM simulator has been dropped since the simulator appears to be no longer actively developed.
The default precision for the NEST backend has been changed to “off_grid”. This reflects the PyNN philosophy that
defaults should prioritize accuracy and compatibility over performance. (We think performance is very important, it’s
just that any decision to risk compromising accuracy or interoperability should be made deliberately by the end user.)
The Izhikevich neuron model is now available for all backends.
16.3.12 Python compatibility
Support for Python 3 has been added (versions 3.3+). Support for Python versions 2.5 and earlier has been dropped.
16.3.13 Changes for developers
Other than the internal refactoring, the main change for developers is that we have switched from Subversion to Git.
PyNN development now takes place at https://github.com/NeuralEnsemble/PyNN/. We are now taking advantage of
the integration of GitHub with TravisCI, to automatically run the test suite whenever changes are pushed to GitHub.
16.4 PyNN 0.8.0 release candidate 1 release notes
August 19th 2015
Welcome to the first release candidate of PyNN 0.8.0!
For full information about what’s new in PyNN 0.8, see the PyNN 0.8 alpha 1 release notes, PyNN 0.8 beta 1 release
notes and PyNN 0.8 beta 2 release notes.
16.4.1 NEST 2.6
The main new feature in this release is support for NEST 2.6. Previous versions of NEST are no longer supported.
16.4.2 Other changes
• Travis CI now runs system tests as well as unit tests.
• Assorted bug fixes
16.5 PyNN 0.8 beta 2 release notes
January 6th 2015
Welcome to the second beta release of PyNN 0.8!
For full information about what’s new in PyNN 0.8, see the PyNN 0.8 alpha 1 release notes and PyNN 0.8 beta 1
release notes.
16.5.1 NEST 2.4
The main new feature in this release is support for NEST 2.4. Previous versions of NEST are no longer supported.
16.5.2 Python 3
With the rewrite of PyNEST in NEST 2.4, PyNN now has two backend simulators (NEURON being the other) that
support Python 3. There was really no longer any excuse not to add Python 3 support to PyNN, which turned out to
be very straightforward.
16.5.3 Standardization of random distributions
Since its earliest versions PyNN has supported swapping in and out different random number generators, but until now
there has been no standardization of these RNGs; for example the GSL random number library uses “gaussian” where
NumPy uses “normal”. This limited the usefulness of this feature, especially for the NativeRNG class, which signals
that random number generation should be passed down to the simulator backend rather than being done at the Python
level.
This has now, finally, been fixed. The names of random number distributions and of their parameters have now been
standardized, based for the most part on the nomenclature used by Wikipedia. A quick example:
from pyNN.random import NumpyRNG, GSLRNG, RandomDistribution
rd1 = RandomDistribution('normal', mu=0.5, sigma=0.1, rng=NumpyRNG(seed=922843))
rd2 = RandomDistribution('normal', mu=0.5, sigma=0.1, rng=GSLRNG(seed=426482))
16.5.4 API changes
• Population.record() now has an optional sampling_interval argument, allowing recording at intervals
larger than the integration time step (see the example after this list).
• FixedNumberPostConnector now has an option with_replacement, which controls how the post-synaptic
population is sampled, and affects the incidence of multiple connections between pairs of neurons (“multapses”).
• The default value of the min_delay argument to setup() is now “auto”, which means that the simulator should
calculate the minimal synaptic delay itself. This change can lead to large speedups for NEST and NEURON
code.
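For example, to record the membrane potential every 1.0 ms in a simulation with a 0.1 ms time step (a sketch, with population an existing Population and the backend's functions already imported):

setup(timestep=0.1, min_delay="auto")
population.record('v', sampling_interval=1.0)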
16.5.5 Other changes
• Reimplemented Izhikevich model for NEURON to allow current injection.
• Connectors that can generate multiple connections between a given pair of neurons (“multapses”) now work
properly with the pyNN.nest backend.
• Added a version of CSAConnector for the NEST backend that passes down the CSA object to PyNEST’s
CGConnect() function. This greatly speeds up CSAConnector with NEST.
• Added some new example scripts, deleted some of the more trivial, repetitive examples, and merged the several
variants of the “VAbenchmarks” example into a single script.
• When data blocks from different MPI nodes are merged, the spike trains are now by default sorted by neuron
ID. If this sorting proves to be too time-consuming we can in future expose sort/don’t sort as an option.
• Added IF_cond_exp_gsfa_grr standard model (integrate and fire neuron with spike frequency adaption
and relative refractory period) to Brian backend, and fixed broken HH_cond_exp model.
• Improvements to callback handling.
• Assorted bug fixes
16.6 PyNN 0.8 beta 1 release notes
November 15th 2013
Welcome to the first beta release of PyNN 0.8!
For full information about what’s new in PyNN 0.8, see the PyNN 0.8 alpha 1 release notes.
16.6.1 Brian backend
The main new feature in this beta release is the reappearance of the Brian backend, updated to work with the 0.8 API.
You will need version 1.4 of Brian. There are still some rough edges, but we encourage anyone who has used Brian
with PyNN 0.7 to update their scripts now and give it a try.
16.6.2 New and improved connectors
The library of Connector classes has been extended. The DistanceDependentProbabilityConnector
(DDPC) has been generalized, resulting in the IndexBasedProbabilityConnector, with which the connection probability can be specified as any function of the indices i and j of the pre- and post-synaptic neurons within
their populations. In addition, the distance expression for the DDPC can now be a callable object (such as a function)
as well as a string expression.
The ArrayConnector allows connections to be specified as an explicit boolean matrix, with shape (m, n) where m
is the size of the presynaptic population and n that of the postsynaptic population.
The CSAConnector has been updated to work with the 0.8 API.
The FromListConnector and FromFileConnector now support specifying any synaptic parameter (e.g. parameters of the synaptic plasticity rule), not just weight and delay.
16.6.3 API changes
The set() function now matches the Population.set() method, i.e. it takes one or more parameter name/value
pairs as keyword arguments.
Two new functions for advancing a simulation have been added: run_for() and run_until(). run_for() is
just an alias for run(). run_until() allows you to specify the absolute time at which a simulation should stop,
rather than the increment of time. In addition, it is now possible to specify a call-back function that should be called
at intervals during a run, e.g.:
>>> def report_time(t):
...     print("The time is %g" % t)
...     return t + 100.0
>>> run_until(300.0, callbacks=[report_time])
The time is 0
The time is 100
The time is 200
The time is 300
One potential use of this feature is to record synaptic weights during a simulation with synaptic plasticity.
We have changed the parameterization of STDP models. The A_plus and A_minus parameters have been moved from
the weight-dependence components to the timing-dependence components, since effectively they describe the shape
of the STDP curve independently of how the weight change depends on the current weight.
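For example, an STDP synapse type in PyNN 0.8 is specified along these lines (a sketch; the numerical values are illustrative):

stdp = STDPMechanism(
    # A_plus and A_minus now belong to the timing-dependence component
    timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                    A_plus=0.01, A_minus=0.012),
    weight_dependence=AdditiveWeightDependence(w_min=0.0, w_max=0.04),
    weight=0.02, delay=0.5)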
16.6.4 Simple plotting
We have added a small library to make it easy to produce simple plots of data recorded from a PyNN simulation.
This is not intended for publication-quality or highly-customized plots, but for basic visualization.
For example:
from pyNN.utility.plotting import Figure, Panel
...
population.record('spikes')
population[0:2].record(('v', 'gsyn_exc'))
...
data = population.get_data().segments[0]
vm = data.filter(name="v")[0]
gsyn = data.filter(name="gsyn_exc")[0]
Figure(
    Panel(vm, ylabel="Membrane potential (mV)"),
    Panel(gsyn, ylabel="Synaptic conductance (uS)"),
    Panel(data.spiketrains, xlabel="Time (ms)", xticks=True)
).save("simulation_results.png")
16.6.5 Gap junctions
The NEURON backend now supports gap junctions. This is not yet an official part of the PyNN API, since any official
feature must be supported by at least two backends, but could be very useful to modellers using NEURON.
16.6.6 Other changes
The default precision for the NEST backend has been changed to “off_grid”. This reflects the PyNN philosophy that
defaults should prioritize accuracy and compatibility over performance. (We think performance is very important, it’s
just that any decision to risk compromising accuracy or interoperability should be made deliberately by the end user.)
The Izhikevich neuron model is now available for the NEURON, NEST and Brian backends, although there are still
some problems with injecting current when using the NEURON backend.
A whole bunch of bugs have been fixed: see the issue tracker.
16.6.7 For developers
We are now taking advantage of the integration of GitHub with TravisCI, to automatically run the suite of unit tests
whenever changes are pushed to GitHub. Note that this does not run the system tests or any other tests that require
installation of a simulator backend.
16.7 PyNN 0.8 alpha 2 release notes
May 24th 2013
Welcome to the second alpha release of PyNN 0.8!
For full information about what’s new in PyNN 0.8, see the PyNN 0.8 alpha 1 release notes.
This second alpha is mostly just a bug-fix release, although we have added a new class, CloneConnector (thanks
to Tom Close), which takes the connection matrix from an existing Projection and uses it to create a new
Projection, with the option of changing the weights, delays, receptor type, etc.
The other big change for developers is that we have switched from Subversion to Git. PyNN development now takes
place at https://github.com/NeuralEnsemble/PyNN/
16.8 PyNN 0.8 alpha 1 release notes
July 31st 2012
Welcome to the first alpha release of PyNN 0.8! This is the first time there has been an alpha or beta release of PyNN.
In the past it hasn’t seemed necessary, at first because few people were using PyNN for their research and those that
were understood well that PyNN was in an early stage of development, more recently because most of the changes
were either extensions to the API or due to internal refactoring.
For PyNN 0.8 we have taken the opportunity to make significant, backward-incompatible changes to the API. The aim
was fourfold:
• to simplify the API, making it more consistent and easier to remember;
• to make the API more powerful, so more complex models can be expressed with less code;
• to allow a number of internal simplifications so it is easier for new developers to contribute;
• to prepare for planned future extensions, notably support for multi-compartmental models.
Since there have been so many changes, it seemed prudent to have a number of development releases before the final
release of 0.8.0, to get as much testing from users as possible. There may be more alpha releases, and there will be at
least one beta release.
This alpha release of PyNN is not intended for use in your research. If you have existing PyNN scripts, please install
PyNN 0.8 alpha separately to your current PyNN installation (for example using virtualenv) and update your scripts,
as outlined below, in a separate branch of your version control repository. If you find a bug, or if PyNN 0.8 alpha gives
different results to PyNN 0.7, please let us know using the bug tracker or on the mailing list.
Warning: The first alpha release only supports NEURON and NEST. Support for Brian, PCSIM, NeuroML and
MOOSE will be restored/added before the final 0.8.0 release.
16.8.1 Creating populations
In previous versions of PyNN, the Population constructor was called with the population size, a BaseCellType
sub-class such as IF_cond_exp and a dictionary of parameter values. For example:
p = Population(1000, IF_cond_exp, {'tau_m': 12.0, 'cm': 0.8})   # PyNN 0.7
This dictionary was passed to the cell-type class constructor within the Population constructor to create a cell-type
instance.
The reason for doing this was that in early versions of PyNN, use of native NEST models was supported by passing a
string, the model name, as the cell-type argument. Since PyNN 0.7, however, native models have been supported with
the NativeCellType class, and passing a string is no longer allowed.
It makes more sense, therefore, for the cell-type instance to be created by the user, and to pass a cell-type instance,
rather than a cell-type class, to the Population constructor.
There is also a second change: specification of parameters for cell-type classes is now done via keyword arguments
rather than a single parameter dictionary. This is for consistency with current sources and synaptic plasticity models,
which already use keyword arguments.
The example above should be rewritten as:
p = Population(1000, IF_cond_exp(tau_m=12.0, cm=0.8))   # PyNN 0.8
or:
p = Population(1000, IF_cond_exp(**{'tau_m': 12.0, 'cm': 0.8}))   # PyNN 0.8
or:
cell_type = IF_cond_exp(tau_m=12.0, cm=0.8)
p = Population(1000, cell_type)   # PyNN 0.8
The first form, with a separate parameter dictionary, is still supported for the time being, but is deprecated and may be
removed in future versions.
16.8.2 Specifying heterogeneous parameter values
In previous versions of PyNN, the Population constructor supported setting parameters to either homogeneous
values (all cells in the population have the same value) or random values. After construction, it was possible to change
parameters using the Population.set(), Population.tset() (for topographic set - parameters were set by
using an array of the same size as the population) and Population.rset() (for random set) methods.
In PyNN 0.8, setting parameters is simpler and more consistent, in that both when constructing a cell type for use in
the Population constructor (see above) and in the Population.set() method, parameter values can be any of
the following:
• a single number - sets the same value for all cells in the Population;
• a RandomDistribution object - for each cell, sets a different random value drawn from the distribution;
• a list or 1D NumPy array of the same size as the Population;
• a function that takes a single integer argument; this function will be called with the index of every cell in the
Population to return the parameter value for that cell.
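For example (a sketch; the parameter values are illustrative):

from pyNN.random import RandomDistribution

p = Population(1000, IF_cond_exp(
        tau_m=RandomDistribution('normal', mu=20.0, sigma=2.0),  # random value per cell
        cm=0.85,                                                 # same value for all cells
        i_offset=lambda i: 0.1 * (i % 10)))                      # function of the cell index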
See Model parameters and initial values for more details and examples.
The call signature of the Population.set() method has also been changed; now parameters should be specified
as keyword arguments. For example, instead of:
p.set("tau_m": 20.0)
# PyNN 0.7
use:
p.set(tau_m=20.0)   # PyNN 0.8
and instead of:
p.set({"tau_m": 20.0, "v_rest": -65})
# PyNN 0.7
use:
p.set(tau_m=20.0, v_rest=-65)   # PyNN 0.8
Now that Population.set() accepts random distributions and arrays as arguments, the Population.tset()
and Population.rset() methods are superfluous. As of version 0.8, their use is deprecated and they will probably be removed in the next version of PyNN. Their use can be replaced as follows:
p.tset("i_offset", arr)
p.set(i_offset=arr)
# PyNN 0.7
# PyNN 0.8
p.rset("tau_m": rand_distr)
p.set(tau_m=rand_distr)
# PyNN 0.7
# PyNN 0.8
Setting spike times
Where a single parameter value is already an array, e.g. spike times, this should be wrapped by a Sequence object.
For example, to generate a different Poisson spike train for every neuron in a population of SpikeSourceArrays:
def generate_spike_times(i_range):
    return [Sequence(numpy.add.accumulate(numpy.random.exponential(10.0, size=10)))
            for i in i_range]

p = sim.Population(30, sim.SpikeSourceArray(spike_times=generate_spike_times))
16.8.3 Recording
Previous versions of PyNN had three methods for recording from populations of neurons: record(), record_v()
and record_gsyn(), for recording spikes, membrane potentials, and synaptic conductances, respectively. There
was no official way to record any other state variables, for example the w variable from the adaptive-exponential
integrate-and-fire model, or when using native, non-standard models, although there were workarounds.
In PyNN 0.8, we have replaced these three methods with a single record() method, which takes the variable to
record as its first argument, e.g.:
p.record() # PyNN 0.7
p.record_v()
p.record_gsyn()
becomes:
p.record('spikes') # PyNN 0.8
p.record('v')
p.record(['gsyn_exc', 'gsyn_inh'])
Note that (1) you can now choose to record the excitatory and inhibitory synaptic conductances separately,
(2) you can give a list of variables to record, so, for example, you can record all the variables for the
EIF_cond_exp_isfa_ista model in a single command using:
p.record(['spikes', 'v', 'w', 'gsyn_exc', 'gsyn_inh'])   # PyNN 0.8
Note that the record_v() and record_gsyn() methods still exist, but their use is deprecated, and they will be
removed in the next version of PyNN.
See Recording spikes and state variables for more details.
16.8.4 Retrieving recorded data
Perhaps the biggest change in PyNN 0.8 is that handling of recorded data, whether retrieval as Python objects or saving
to file, now uses the Neo package, which provides a common Python object model for neurophysiology data (whether
real or simulated).
Using Neo provides several advantages:
• data objects contain essential metadata, such as units, sampling interval, etc.;
• data can be saved to any of the file formats supported by Neo, including HDF5 and Matlab files;
• it is easier to handle data when running multiple simulations with the same network (calling reset() between
each one);
• it is possible to save multiple signals to a single file;
• better interoperability with other Python packages using Neo (for data analysis, etc.).
Note that Neo is based on NumPy, and most Neo data objects sub-class the NumPy ndarray class, so much of your
data handling code should work exactly the same as before.
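For example, units and sampling interval travel with the recorded signal (a sketch, with population an existing, recorded Population):

data = population.get_data().segments[0]
vm = data.filter(name="v")[0]   # an analog signal (a NumPy array subclass)
print(vm.units)                 # e.g. mV
print(vm.sampling_period)       # e.g. 0.1 ms
print(vm.mean())                # ordinary NumPy operations still work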
See Data handling for more details.
16.8.5 Creating connections
In previous versions of PyNN, synaptic weights and delays were specified on creation of the Connector object. If the
synaptic weight had its own dynamics (whether short-term or spike-timing-dependent plasticity), the parameters for
this were specified on creation of a SynapseDynamics object. In other words, specification of synaptic parameters
was split across two different classes.
SynapseDynamics was also rather complex, and could have both a “fast” (for short-term synaptic depression
and facilitation) and “slow” (for long-term plasticity) component, although most simulator backends did not support
specifying both fast and slow components at the same time.
In PyNN 0.8, all synaptic parameters including weights and delays are given as arguments to a SynapseType subclass such as StaticSynapse or TsodyksMarkramSynapse. For example, instead of:
prj = Projection(p1, p2, AllToAllConnector(weights=0.05, delays=0.5))   # PyNN 0.7
you should now write:
prj = Projection(p1, p2, AllToAllConnector(), StaticSynapse(weight=0.05, delay=0.5))   # PyNN 0.8
and instead of:
# PyNN 0.7
params = {'U': 0.04, 'tau_rec': 100.0, 'tau_facil': 1000.0}
facilitating = SynapseDynamics(fast=TsodyksMarkramMechanism(**params))
prj = Projection(p1, p2, FixedProbabilityConnector(p_connect=0.1, weights=0.01),
                 synapse_dynamics=facilitating)
the following:
# PyNN 0.8
params = {'U': 0.04, 'tau_rec': 100.0, 'tau_facil': 1000.0, 'weight': 0.01}
facilitating = TsodyksMarkramSynapse(**params)
prj = Projection(p1, p2, FixedProbabilityConnector(p_connect=0.1),
                 synapse_type=facilitating)
Note that “weights” and “delays” are now “weight” and “delay”. In addition, the “method” argument to
Projection is now called “connector”, and the “target” argument is now “receptor_type”. The “rng” argument has been moved from Projection to Connector, and the “space” argument of Connector has been
moved to Projection.
The ability to specify both short-term and long-term plasticity for a given connection type, in a simulator-independent
way, has been removed, although in practice only the NEURON backend supported this. This functionality will be
reintroduced in PyNN 0.9. If you need this in the meantime, a workaround for the NEURON backend is to use a
NativeSynapseType mechanism - ask on the mailing list for guidance.
16.8.6 Specifying heterogeneous synapse parameters
As for neuron parameters, synapse parameter values can now be any of the following:
• a single number - sets the same value for all connections in the Projection;
• a RandomDistribution object - for each connection, sets a different random value drawn from the distribution;
• a list or 1D NumPy array of the same size as the Projection (although this is not very useful for random
networks, whose size may not be known in advance);
• a function that takes a single float argument; this function will be called with the distance between the pre- and
post-synaptic cells to return the parameter value for that connection.
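For example, to make synaptic weights fall off with the distance between pre- and post-synaptic cells (a sketch; the populations and constants are illustrative):

import numpy

distance_weighted = StaticSynapse(weight=lambda d: 0.1 * numpy.exp(-d / 50.0),
                                  delay=0.5)
prj = Projection(p1, p2, FixedProbabilityConnector(p_connect=0.1),
                 synapse_type=distance_weighted)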
16.8.7 Accessing, setting and saving properties of synaptic connections
In older versions of PyNN, the Projection class had a bunch of methods for working with
synaptic parameters: getWeights(), setWeights(), randomizeWeights(), printWeights(),
getDelays(), setDelays(), randomizeDelays(), printDelays(), getSynapseDynamics(),
setSynapseDynamics(), randomizeSynapseDynamics(), and saveConnections().
These have been replaced by three methods: get(), set() and save(). The original methods still exist, but their
use is deprecated and they will be removed in the next version of PyNN. You should update your code as follows:
prj.getWeights(format='list')                          # PyNN 0.7
prj.get('weight', format='list', with_address=False)   # PyNN 0.8
prj.randomizeDelays(rand_distr)   # PyNN 0.7
prj.set(delay=rand_distr)         # PyNN 0.8
prj.setSynapseDynamics('tau_rec', 50.0)   # PyNN 0.7
prj.set(tau_rec=50.0)                     # PyNN 0.8
prj.printWeights('exc_weights.txt', format='array')   # PyNN 0.7
prj.save('weight', 'exc_weights.txt', format='array') # PyNN 0.8
prj.saveConnections('exc_conn.txt')              # PyNN 0.7
prj.save('all', 'exc_conn.txt', format='list')   # PyNN 0.8
Also note that all three new methods can operate on several parameters at a time:
weights, delays = prj.getWeights('array'), prj.getDelays('array')   # PyNN 0.7
weights, delays = prj.get(['weight', 'delay'], format='array')      # PyNN 0.8
prj.randomizeWeights(rand_distr); prj.setDelays(0.4)   # PyNN 0.7
prj.set(weight=rand_distr, delay=0.4)                  # PyNN 0.8
16.8.8 Python compatibility
With an eye towards future support for Python 3, we have decided to drop support for Python versions 2.5 and earlier
in PyNN 0.8.
16.9 PyNN 0.7 release notes
4th February 2011
This release sees a major extension of the API with the addition of the PopulationView and Assembly classes,
which aim to make building large, structured networks much simpler and cleaner. A PopulationView allows
a sub-set of the neurons from a Population to be encapsulated in an object. We call it a “view”, rather than a
“sub-population”, to emphasize the fact that the neurons are not copied: they are the same neurons as in the parent
Population, and any operations on either view or parent (setting parameter values, recording, etc.) will be reflected
in the other. An Assembly is a list of Population and/or PopulationView objects, enabling multiple cell
types to be encapsulated in a single object. PopulationView and Assembly objects behave in most ways like
Population: you can record them, connect them using a Projection, you can have views of views...
The “low-level API” (rechristened “procedural API”) has been reimplemented in terms of Population and
Projection. For example, create() now returns a Population object rather than a list of IDs, and
connect() returns a Projection object. This change should be almost invisible to the user, since Population
now behaves very much like a list of IDs (can be sliced, joined, etc.).
There has been a major change to cell addressing: Populations now always store cells in a one-dimensional array,
which means cells no longer have an address but just an index. To specify the spatial structure of a Population,
pass a Structure object to the constructor, e.g.:
p = Population((12,10), IF_cond_exp)
is now:
p = Population(120, IF_cond_exp, structure=Grid2D(1.2))
although the former syntax still works, for backwards compatibility. The reasons for doing this are:
1. we can now have more interesting structures than just grids
2. efficiency (less juggling addresses, flattening)
3. simplicity (less juggling addresses, less code).
The API for setting initial values has changed: this is now done via the initialize() function or the
Population.initialize() method, rather than by having v_init and similar parameters for cell models.
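For example (a sketch):

p = Population(100, IF_cond_exp, {'v_init': -70.0})   # PyNN 0.6

p = Population(100, IF_cond_exp)                      # PyNN 0.7
p.initialize('v', -70.0)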
16.9.1 Other API changes
• simplification of the record_X() methods. With the addition of the PopulationView class, the selection
logic implemented by the record_from and rng arguments duplicated that in Population.__getitem__()
and Population.sample(), and so these arguments have been removed, and the record_X() methods
now record all neurons within a Population, PopulationView or Assembly. Examples of syntax
changes:
pop.record_v([pop[0], pop[17]]) --> pop[(0, 17)].record_v()
pop.record(10, rng=rng) --> pop.sample(10, rng).record()
• enhanced describe() methods: can now use Jinja2 or Cheetah templating engines to produce much nicer,
better formatted network descriptions.
• connections and neuron positions can now be saved to various binary formats as well as to text files.
• added some new connectors: SmallWorldConnector and CSAConnector (CSA = Connection Set Algebra)
• native neuron and synapse models are now supported using a NativeModelType subclass, rather than being specified as strings. This simplifies the code internally and increases the range of PyNN functionality that can be
used with native models (e.g. you can now record any variable from a native NEST or NEURON model). For
NEST, there is a class factory native_cell_type(), for NEURON the NativeModelType subclasses
have to be written by hand.
16.9.2 Backend changes
• the NEST backend has been updated to work with NEST version 2.0.0.
• the Brian backend has seen extensive work on performance and on bringing it to feature parity with the other
backends.
16.9.3 Details
• Where Population.initial_values() contains arrays, these arrays now consistently contain only
enough values for local cells. Before, there was some inconsistency about how this was handled. Still need
more tests to be sure it’s really working as expected.
• Allow override of default_maxstep for the NEURON backend as a setup parameter. This is for the case where the user
wants to add network connections across nodes after simulation start time.
• Discovered that when using NEST with mpi4py, you must import nest first and let it do the MPI initialization. The only time this seems to be a problem with PyNN is if a user imports pyNN.random before
pyNN.nest. It would be nice to handle this more gracefully, but for now I’ve just added a test that NEST and
mpi4py agree on the rank, and a hopefully useful error message.
• Added a new setup() option for pyNN.nest: recording_precision. By default, recording_precision is 3 for
on-grid and 15 for off-grid.
• Partially fixed the pyNN.nest implementation of TsodyksMarkramMechanism (cf ticket:172). The
‘tsodyks_synapse’ model has a ‘tau_psc’ parameter, which should be set to the same value as the decay time
constant of the post-synaptic current (which is a parameter of the neuron model). I consider this only a partial
fix, because if ‘tau_syn_E’ or ‘tau_syn_I’ is changed after the creation of the Projection, ‘tau_psc’ will not be
updated to match (unlike in the pyNN.neuron implementation). I’m also not sure how well it will work with
native neuron models.
• reverted pyNN.nest to reading/resetting the current time from the kernel rather than keeping track of it within
PyNN. NEST warns that this is dangerous, but all the tests pass, so let’s wait and see.
• In HH_cond_exp, conductances are now in µS, as for all other conductances in PyNN, instead of nS.
• NEURON now supports Tsodyks-Markram synapses for current-based exponential synapses (before it was only
for conductance-based).
• NEURON backend now supports the IF_cond_exp_gsfa_grr model.
• Added a sample() method to Population, which returns a PopulationView of a random sample of
the neurons in the parent population.
• Added the EIF_cond_exp/alpha_isfa/ista and HH_cond_exp standard models in Brian.
• Added a gather option to the Population.get() method.
• brian.setup() now accepts a number of additional arguments in extra_params. For example,
extra_params={'useweave': True} will lead to inline C++ code generation.
• Wrote a first draft of a developers’ guide.
• Considerably extended the core.LazyArray class, as a basis for a possible rewrite of the connectors module.
• The random module now uses mpi4py to determine the MPI rank and num_processes, rather than receiving
these as arguments to the RNG constructor (see ticket:164).
• Many fixes and performance enhancements for the brian module, which now supports synaptic plasticity.
• No more GSL warning every time! Just raise an Exception if we attempt to use GSLRNG and pygsl is not
available.
• Added some more flexibility to init_logging(): logfile=None -> stderr, format includes size & rank,
user can override log-level
• NEST __init__.py changed to query NEST for filling NEST_SYNAPSE_TYPES.
• Started to move synapse dynamics related stuff out of Projection and into the synapse dynamics-related
classes, where it belongs.
• Added a new “spike_precision” option to nest.setup() (see http://neuralensemble.org/trac/PyNN/wiki/SimulatorSpecificOptio
• Updated the NEST backend to work with version 2.0.0
• Rewrote the test suite, making a much cleaner distinction between unit tests, which now make heavy
use of mock objects to better-isolate components, and system tests. Test suite now runs with nose
(https://nose.readthedocs.org/en/latest/), in order to facilitate continuous integration testing.
• Changed the format of connection files, as written by saveConnections() and read by
FromFileConnector: files no longer contain the population label. Connections can now also
be written to NumpyBinaryFile or PickleFile objects, instead of just text files. Same for
Population.save_positions().
• Added CSAConnector, which wraps the Connection Set Algebra for use by PyNN. Requires the csa package:
https://pypi.python.org/pypi/csa/
• Enhanced distance expressions by allowing expressions such as (d[0] < 0.1) & (d[1] < 0.2). Complex forms can therefore now be drawn, such as squares, ellipses, and so on.
• Added an n_connections flag to the DistanceDependentProbabilityConnector in order to be able
to constrain the total number of connections. Can be useful for normalizations.
• Added a simple SmallWorldConnector. Cells are connected within a certain degree d. Then, all the
connections are rewired with a probability given by a rewiring parameter and new targets are uniformly selected
among all the possible targets.
• Added a method to save cell positions to file.
• Added a progress bar to connectors. A verbose flag now controls whether a progress bar, indicating the
percentage of connections established, is displayed.
• New implementation of the connector classes, with much improved performance and scaling with MPI, and
extension of distance-dependent weights and delays to all connectors. In addition, a safe flag has been added to
all connectors: on by default, a user can turn it off to avoid tests on weights and delays.
• Added the ability to set the atol and rtol parameters of NEURON’s cvode solver in the extra_params argument
of setup() (patch from Johannes Partzsch).
• Made pyNN.nest's handling of the refractory period consistent with the other backends. Made the default
refractory period 0.1 ms rather than 0.0 ms, since NEST appears not to handle zero refractory period.
• Moved standard model (cells and synapses) machinery, the Space class, and Error classes out of common
into their own modules.
16.9.4 Release 0.7.1
This bug-fix release added copyright statements to all files, together with some minor bug fixes.
16.9.5 Release 0.7.2
16.9.6 Release 0.7.3
16.9.7 Release 0.7.4
16.9.8 Release 0.7.5
16.10 PyNN 0.6 release notes
14th February 2010
Welcome to PyNN 0.6!
There have been three major changes to the API in this version.
• Spikes, membrane potential and synaptic conductances can now be saved to file in various binary formats. To
do this, pass a PyNN File object to Population.print_X(), instead of a filename. There are various
types of PyNN File object, defined in the recording.files module, e.g., StandardTextFile,
PickleFile, NumpyBinaryFile, HDF5ArrayFile.
• Added a reset() function and made the behaviour of setup() consistent across simulators. reset() sets
the simulation time to zero and sets membrane potentials to their initial values, but does not change the network
structure. setup() destroys any previously defined network.
• The possibility of expressing distance-dependent weights and delays was extended to the
AllToAllConnector and FixedProbabilityConnector classes. To reduce the number of arguments to the constructors, the arguments affecting the spatial topology (periodic boundary conditions, etc.)
were moved to a new Space class, so that only a single Space instance need be passed to the Connector
constructor.
16.10.1 Details
• Switched to using the point process-based AdExp mechanism in NEURON.
• Factored out most of the commonality between the Recorder classes of each backend into a parent class
recording.Recorder, and tidied up the recording module.
• Added an attribute conductance_based to StandardCellType, to make the determination of synapse
type for a given cell more robust.
• PyNN now uses a named logger, which makes it easier to control logging levels when using PyNN within a
larger application.
• implemented gather for Projection.saveConnections()
• Added a test script (test_mpi.py) to check whether serial and distributed simulations give the same results
• Added a size() method to Projection, to give the total number of connections across all nodes (unlike
__len__(), which gives only the connections on the local node)
• Speeded up record() by a huge factor (from 10 s for 12000 cells to less than 0.1 s) by removing an unnecessary
conditional path (since all IDs now have an attribute “local”)
• synapse_type is now passed to the ConnectionManager constructor, not to the connect() method, since
(a) it is fixed for a given connection manager, (b) it is needed in other methods than just connect(); fixed
weight unit conversion in brian module.
• Updated connection handling in nest module to work with NEST version 1.9.8498. Will not now work with
previous NEST versions.
• The neuron back-end now supports having both static and Tsodyks-Markram synapses on the same neuron
(previously, the T-M synapses replaced the static synapses) - in agreement with nest and common sense.
Thanks to Bartosz Telenczuk for reporting this.
• Added a compatible_output mode for the saveConnections() method. True by default, it allows connections to be reloaded from a file. If False, then the raw connections are stored, which makes for easier
postprocessing.
• Added an ACSource current source to the nest module.
• Fixed Hoc build directory problem in setup.py - see ticket:147
• Population.get_v() and the other “get” methods now return cell indices (starting from 0) rather than cell
IDs. This behaviour now matches that of Population.print_v(), etc. See ticket:119 if you think this is
a bad idea.
• Moved the base Connector class from common to connectors. Put the distances() function inside a
Space class, to allow more convenient specification of topology parameters.
• Projection.setWeights() and setDelays() now accept a 2D array argument (ref ticket:136), to be
symmetric with getWeights() and getDelays(). For distributed simulations, each node only takes the
values it needs from the array.
• FixedProbabilityConnector is now more strict, and checks that p_connect is less than 1 (see
ticket:148). This makes no difference to the behaviour, but could act as a check for errors in user code.
• Fixed problem with changing SpikeSourcePoisson rate during a simulation (see ticket:152)
CHAPTER 17
Developers’ Guide
17.1 Developers’ guide
This guide contains information about contributing to PyNN development, and aims to explain the overall architecture
and some of the internal details of the PyNN codebase.
PyNN is open-source software, with a community-based development model: contributions from users are welcomed,
and the direction that PyNN development should take in the future is determined by the needs of its users.
There are several ways to contribute to PyNN:
• reporting bugs, errors and other mistakes in the code or documentation;
• making suggestions for improvements;
• fixing bugs and other mistakes;
• adding or maintaining a simulator backend;
• major refactoring to improve performance, reduce code complexity, or both.
The following sections contain guidelines for each of these.
17.1.1 Bug reports and feature requests
If you find a bug or would like to add a new feature to PyNN, please go to
https://github.com/NeuralEnsemble/PyNN/issues/. First check that there is not an existing ticket for your bug
or request, then click on “New issue” to create a new ticket (you will need a GitHub account, but creating one is
simple and painless).
17.1.2 Contributing to PyNN
Mailing list
Discussions about PyNN take place in the NeuralEnsemble Google Group.
Setting up a development environment
We strongly suggest you work in a virtual environment, e.g. using virtualenv or Anaconda.
Requirements
In addition to the requirements listed in Installation, you will need to install:
• nose
• mock
• coverage
to run tests, and:
• Sphinx
• matplotlib
to build the documentation.
Code checkout
PyNN development is based around GitHub. Once you have a GitHub account, you should fork the official PyNN
repository, and then clone your fork to your local machine:
$ git clone https://github.com/<username>/PyNN.git pyNN_dev
To work on the development version:
$ git checkout master
To work on the latest stable release (for bug-fixes):
$ git checkout --track origin/0.8
To keep your PyNN repository up-to-date with respect to the official repository, add it as a remote:
$ git remote add upstream https://github.com/NeuralEnsemble/PyNN.git
and then you can pull in any upstream changes:
$ git pull upstream master
To get PyNN onto your PYTHONPATH there are many options, such as:
• pip editable mode (pip install -e /path/to/PyNN)
• creating a symbolic link named pyNN from somewhere that is already on your PYTHONPATH, such as the
site-packages directory, to the pyNN_dev/pyNN directory.
If you are developing with NEURON, don’t forget to compile the NMODL files in pyNN/neuron/nmodl by running nrnivmodl, and to recompile any time you modify any of them.
Coding style
We try to stay fairly close to PEP8. Please note in particular:
• indentation of four spaces, no tabs
• single space around most operators, but no space around the ‘=’ sign when used to indicate a keyword argument
or a default parameter value.
• some function/method names in PyNN use mixedCase, but these will gradually be deprecated and replaced
with lower_case_with_underscores. Any new functions or methods should use the latter.
• we currently target versions 2.6, 2.7 and 3.4
Testing
Running the PyNN test suite requires the nose and mock packages, and optionally the coverage package. To run
the entire test suite, in the test subdirectory of the source tree:
$ nosetests
To see how well the codebase is covered by the tests, run:
$ nosetests --with-coverage --cover-package=pyNN --cover-erase --cover-html
There are currently two sorts of tests, unit tests, which aim to exercise small pieces of code such as individual functions
and methods, and system tests, which aim to test that all the pieces of the system work together as expected.
If you add a new feature to PyNN, or fix a bug, you should write both unit and system tests.
Unit tests should where necessary make use of mock/fake/stub/dummy objects to isolate the component under test as
well as possible. The pyNN.mock module is a complete mock simulator backend that may be used for this purpose.
Except when testing a specific simulator interface, unit tests should be able to run without a simulator installed.
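For example, a unit test can exercise the standard API without any simulator installed (a minimal sketch):

import pyNN.mock as sim

sim.setup()
p = sim.Population(10, sim.IF_cond_exp())
p.record('v')
sim.run(100.0)
sim.end()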
System tests should be written so that they can run with any of the simulators. The suggested way to do this is to
write test functions, in a separate file, that take a simulator module as an argument, and then call these functions from
test_neuron.py, test_nest.py, etc.
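For example (the file and function names here are hypothetical):

# scenarios.py
def scenario_small_network(sim):
    sim.setup(timestep=0.1)
    p = sim.Population(5, sim.IF_cond_exp())
    p.record('spikes')
    sim.run(100.0)
    sim.end()

# test_nest.py
import pyNN.nest
from scenarios import scenario_small_network

def test_small_network():
    scenario_small_network(pyNN.nest)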
The test/unsorted directory contains a number of old tests that are either no longer useful or have not yet been
adapted to the nose framework. These are not part of the test suite, but we are gradually adapting those tests that are
useful and deleting the others.
Submitting code
The best way to get started with contributing code to PyNN is to fix a small bug (bugs marked “minor” in the bug
tracker) in your checkout of the code. Once you are happy with your changes, run the test suite again to check that
you have not introduced any new bugs. If this is your first contribution to the project, please add your name and
affiliation/employer to AUTHORS.
After committing the changes to your local repository:
$ git commit -m 'informative commit message'
first pull in any changes from the upstream repository:
$ git pull upstream master
then push to your own account on GitHub:
$ git push
Now, via the GitHub web interface, open a pull request.
Documentation
PyNN documentation is generated using Sphinx.
To build the documentation in HTML format, run:
$ make html
in the doc subdirectory of the source tree. Many of the files contain examples of interactive Python sessions. The
validity of this code can be tested by running:
$ make doctest
PyNN documentation is hosted at http://neuralensemble.org/docs/PyNN
Making a release
To make a release of PyNN requires you to have permissions to upload PyNN packages to the Python Package Index
and the INCF Software Center, and to upload documentation to the neuralensemble.org server. If you are interested in
becoming release manager for PyNN, please contact us via the mailing list.
When you think a release is ready, run through the following checklist one last time:
• do all the tests pass? This means running nosetests in test/unittests and test/system and running make doctest in doc. You should do this on at least two Linux systems – one a very recent version
and one at least a year old, and on at least one version of Mac OS X. You should also do this with Python 2.6,
2.7 and 3.4 or 3.5.
• do all the example scripts generate the correct output? Run the run_all_examples.py script in
examples/tools and then visually check the .png files generated in examples/tools/Results.
Again, you should do this on at least two Linux systems and one Mac OS X system.
• does the documentation build without errors? You should then at least skim the generated HTML pages to check
for obvious problems.
• have you updated the version numbers in setup.py, pyNN/__init__.py, doc/conf.py and
doc/installation.txt?
• have you updated the changelog?
Once you’ve confirmed all the above, create a source package using:
$ python setup.py sdist
and check that it installs properly (you will find it in the dist subdirectory).
Now you should commit any changes, then tag with the release number as follows:
$ git tag x.y.z
where x.y.z is the release number.
You should now upload the documentation to http://neuralensemble.org/docs/PyNN/ by running:
$ make zip
in the doc directory, and then unpacking the resulting archive on the NeuralEnsemble server.
If this is a development release (i.e. an alpha or beta), the final step is to upload the source package to the INCF
Software Center. Do not upload development releases to PyPI.
To upload a package to the INCF Software Center, log in, then go to the Contents tab. Click on “Add new...” then
“File”, then fill in the form and upload the source package.
If this is a final release, there are a few more steps:
• if it is a major release (i.e. an x.y.0 release), create a new bug-fix branch:
$ git branch x.y
• upload the source package to PyPI:
$ python setup.py sdist upload
• make an announcement on the mailing list
• if it is a major release, write a blog post about it with a focus on the new features and major changes
• go home, take a headache pill and lie down for a while in a darkened room (-;
CHAPTER 18
API reference
18.1 API reference
18.1.1 Populations, Views and Assemblies
Populations
Views (sub-populations)
Assemblies
18.1.2 Connectors
Base class
class Connector(safe=True, callback=None)
Base class for connectors.
All connector sub-classes have the following optional keyword arguments:
safe: if True, check that weights and delays have valid values. If False, this check is skipped.
callback: a function that will be called with the fractional progress of the connection routine. An example
would be progress_bar.set_level.
connect(projection)
get_parameters()
describe(template='connector_default.txt', engine='default')
Returns a human-readable description of the connection method.
The output may be customized by specifying a different template together with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
Built-in connectors
class AllToAllConnector(allow_self_connections=True, safe=True, callback=None)
Connects all cells in the presynaptic population to all cells in the postsynaptic population.
Takes any of the standard Connector optional arguments and, in addition:
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
class OneToOneConnector(safe=True, callback=None)
Where the pre- and postsynaptic populations have the same size, connect cell i in the presynaptic population to
cell i in the postsynaptic population for all i.
Takes any of the standard Connector optional arguments.
class FixedProbabilityConnector(p_connect, allow_self_connections=True, rng=None, safe=True,
callback=None)
For each pair of pre-post cells, the connection probability is constant.
Takes any of the standard Connector optional arguments and, in addition:
p_connect: a float between zero and one. Each potential connection is created with this probability.
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
rng: an RNG instance used to evaluate whether connections exist
class FromListConnector(conn_list, column_names=None, safe=True, callback=None)
Make connections according to a list.
Arguments:
conn_list: a list of tuples, one tuple for each connection. Each tuple should contain: (pre_idx, post_idx,
p1, p2, ..., pn) where pre_idx is the index (i.e. order in the Population, not the ID) of the presynaptic
neuron, post_idx is the index of the postsynaptic neuron, and p1, p2, etc. are the synaptic parameters
(e.g. weight, delay, plasticity parameters).
column_names: the names of the parameters p1, p2, etc. If not provided, it is assumed the parameters are
'weight', 'delay' (for backwards compatibility).
safe: if True, check that weights and delays have valid values. If False, this check is skipped.
callback: if True, display a progress bar on the terminal.
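A minimal usage sketch:

# each tuple gives (pre_idx, post_idx, weight, delay)
conn_list = [(0, 0, 0.10, 0.5),
             (0, 1, 0.12, 0.4),
             (1, 2, 0.08, 0.5)]
connector = FromListConnector(conn_list, column_names=["weight", "delay"])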
class FromFileConnector(file, distributed=False, safe=True, callback=None)
Make connections according to a list read from a file.
Arguments:
file: either an open file object or the filename of a file containing a list of connections, in the format
required by FromListConnector.
distributed: if this is True, then each node will read connections from a file called filename.x, where x is
the MPI rank. This speeds up loading connections for distributed simulations.
safe: if True, check that weights and delays have valid values. If False, this check is skipped.
callback: if True, display a progress bar on the terminal.
class ArrayConnector(array, safe=True, callback=None)
Provide an explicit boolean connection matrix, with shape (m, n) where m is the size of the presynaptic population and n that of the postsynaptic population.
class FixedNumberPreConnector(n,
allow_self_connections=True,
with_replacement=False,
rng=None, safe=True, callback=None)
Each post-synaptic neuron is connected to exactly n pre-synaptic neurons chosen at random.
The sampling behaviour is controlled by the with_replacement argument.
“With replacement” means that each pre-synaptic neuron is chosen from the entire population. There is always
therefore a possibility of multiple connections between a given pair of neurons.
“Without replacement” means that once a neuron has been selected, it cannot be selected again until the entire
population has been selected. This means that if n is less than the size of the pre-synaptic population, there
are no multiple connections. If n is greater than the size of the pre-synaptic population, all possible single
connections are made before starting to add duplicate connections.
Takes any of the standard Connector optional arguments and, in addition:
n: either a positive integer, or a RandomDistribution that produces positive integers. If n is a RandomDistribution, then the number of pre-synaptic neurons is drawn from this distribution for
each post-synaptic neuron.
with_replacement: if True, the selection of neurons to connect is made from the entire population.
If False, once a neuron is selected it cannot be selected again until the entire population has been
connected.
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
rng: an RNG instance used to evaluate which potential connections are created.
class FixedNumberPostConnector(n,
allow_self_connections=True,
with_replacement=False,
rng=None, safe=True, callback=None)
Each pre-synaptic neuron is connected to exactly n post-synaptic neurons chosen at random.
The sampling behaviour is controlled by the with_replacement argument.
“With replacement” means that each post-synaptic neuron is chosen from the entire population. There is always
therefore a possibility of multiple connections between a given pair of neurons.
“Without replacement” means that once a neuron has been selected, it cannot be selected again until the entire
population has been selected. This means that if n is less than the size of the post-synaptic population, there
are no multiple connections. If n is greater than the size of the post-synaptic population, all possible single
connections are made before starting to add duplicate connections.
Takes any of the standard Connector optional arguments and, in addition:
n: either a positive integer, or a RandomDistribution that produces positive integers. If n is a RandomDistribution, then the number of post-synaptic neurons is drawn from this distribution for
each pre-synaptic neuron.
with_replacement: if True, the selection of neurons to connect is made from the entire population.
If False, once a neuron is selected it cannot be selected again until the entire population has been
connected.
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
rng: an RNG instance used to evaluate which potential connections are created.
class FixedTotalNumberConnector(n,
allow_self_connections=True,
with_replacement=True,
rng=None, safe=True, callback=None)
class DistanceDependentProbabilityConnector(d_expression,
allow_self_connections=True,
rng=None, safe=True, callback=None)
For each pair of pre-post cells, the connection probability depends on distance.
Takes any of the standard Connector optional arguments and, in addition:
d_expression: the right-hand side of a valid Python expression for probability, involving 'd', e.g.
"exp(-abs(d))", or "d<3"
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
rng: an RNG instance used to evaluate whether connections exist
class IndexBasedProbabilityConnector(index_expression,
allow_self_connections=True,
rng=None, safe=True, callback=None)
For each pair of pre-post cells, the connection probability depends on an arbitrary function that takes the indices
of the pre- and post-synaptic neurons within their populations.
Takes any of the standard Connector optional arguments and, in addition:
index_expression: a function that takes the two cell indices as inputs and calculates the probability
matrix from it.
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
rng: an RNG instance used to evaluate whether connections exist
class DisplacementDependentProbabilityConnector(disp_function, allow_self_connections=True,
rng=None, safe=True, callback=None)
class SmallWorldConnector(degree, rewiring, allow_self_connections=True, n_connections=None,
rng=None, safe=True, callback=None)
Connect cells so as to create a small-world network.
Takes any of the standard Connector optional arguments and, in addition:
degree: the region length where nodes will be connected locally.
rewiring: the probability of rewiring each edge.
allow_self_connections: if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population.
n_connections: if specified, the number of efferent synaptic connections per neuron.
rng: an RNG instance used to evaluate which connections are created.
class CSAConnector(cset, safe=True, callback=None)
Use the Connection Set Algebra (Djurfeldt, 2012) to connect cells.
Takes any of the standard Connector optional arguments and, in addition:
cset: a connection set object.
class CloneConnector(reference_projection, safe=True, callback=None)
Connects cells with the same connectivity pattern as a previous projection.
18.1.3 Projections
18.1.4 Neuron models
PyNN provides a library of neuron models that have been standardized so as to give the same results (within certain
limits of numerical accuracy) on different backends. Each model is represented by a “cell type” class.
It is also possible to use simulator-specific neuron models, which we call “native” cell types. Of course, such models
will only work with one specific backend simulator.
Note: the development version has some support for specifying cell types using the NineML and NeuroML formats,
but this is not yet available in the current release.
Standard cell types
• Plain integrate-and-fire models:
– IF_curr_exp
– IF_curr_alpha
– IF_cond_exp
– IF_cond_alpha
• Integrate-and-fire with adaptation:
– IF_cond_exp_gsfa_grr
– EIF_cond_alpha_isfa_ista
– EIF_cond_exp_isfa_ista
– Izhikevich
• Hodgkin-Huxley model
– HH_cond_exp
• Spike sources (input neurons)
– SpikeSourcePoisson
– SpikeSourceArray
– SpikeSourceInhGamma
Base class
All standard cell types inherit from the following base class, and have the same methods, as listed below.
class StandardCellType(**parameters)
Bases: pyNN.standardmodels.StandardModelType, pyNN.models.BaseCellType
Base class for standardized cell model classes.
get_schema()
Returns the model schema: i.e. a mapping of parameter names to allowed parameter types.
get_parameter_names()
Return the names of the parameters of this model.
get_native_names(*names)
Return a list of native parameter names for a given model.
has_parameter(name)
Does this model have a parameter with the given name?
translate(parameters)
Translate standardized model parameters to simulator-specific parameters.
reverse_translate(native_parameters)
Translate simulator-specific model parameters to standardized parameters.
simple_parameters()
Return a list of parameters for which there is a one-to-one correspondence between standard and native
parameter values.
scaled_parameters()
Return a list of parameters for which there is a unit change between standard and native parameter values.
computed_parameters()
Return a list of parameters whose values must be computed from more than one other parameter.
describe(template=’modeltype_default.txt’, engine=’default’)
Returns a human-readable description of the cell or synapse type.
The output may be customized by specifying a different template together with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
Simple integrate-and-fire neurons
class IF_cond_exp(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Leaky integrate and fire model with fixed threshold and exponentially-decaying post-synaptic conductance.
injectable = True
conductance_based = True
default_parameters = {‘tau_refrac’: 0.1, ‘cm’: 1.0, ‘tau_syn_E’: 5.0, ‘v_rest’: -65.0, ‘tau_syn_I’: 5.0, ‘tau_m’: 20.0, ‘
recordable = [’spikes’, ‘v’, ‘gsyn_exc’, ‘gsyn_inh’]
default_initial_values = {‘gsyn_exc’: 0.0, ‘gsyn_inh’: 0.0, ‘v’: -65.0}
units = {‘gsyn_exc’: ‘uS’, ‘gsyn_inh’: ‘uS’, ‘v’: ‘mV’}
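As a usage sketch (assuming the NEST backend; the parameter values are illustrative, not the defaults):
>>> import pyNN.nest as sim
>>> sim.setup(timestep=0.1)
>>> cells = sim.Population(100, sim.IF_cond_exp(tau_m=15.0, v_thresh=-52.0))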
class IF_cond_alpha(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Leaky integrate and fire model with fixed threshold and alpha-function-shaped post-synaptic conductance.
injectable = True
conductance_based = True
default_parameters = {‘tau_refrac’: 0.1, ‘cm’: 1.0, ‘tau_syn_E’: 0.3, ‘v_rest’: -65.0, ‘tau_syn_I’: 0.5, ‘tau_m’: 20.0, ‘
recordable = [’spikes’, ‘v’, ‘gsyn_exc’, ‘gsyn_inh’]
default_initial_values = {‘gsyn_exc’: 0.0, ‘gsyn_inh’: 0.0, ‘v’: -65.0}
units = {‘gsyn_exc’: ‘uS’, ‘gsyn_inh’: ‘uS’, ‘v’: ‘mV’}
class IF_curr_exp(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate
synaptic currents for excitatory and inhibitory synapses.)
injectable = True
conductance_based = False
default_parameters = {‘tau_refrac’: 0.1, ‘v_thresh’: -50.0, ‘tau_m’: 20.0, ‘tau_syn_E’: 5.0, ‘v_rest’: -65.0, ‘cm’: 1.0,
recordable = [’spikes’, ‘v’]
default_initial_values = {‘isyn_inh’: 0.0, ‘isyn_exc’: 0.0, ‘v’: -65.0}
units = {‘isyn_inh’: ‘nA’, ‘isyn_exc’: ‘nA’, ‘v’: ‘mV’}
class IF_curr_alpha(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Leaky integrate and fire model with fixed threshold and alpha-function-shaped post-synaptic current.
injectable = True
conductance_based = False
default_parameters = {‘tau_refrac’: 0.1, ‘v_thresh’: -50.0, ‘tau_m’: 20.0, ‘tau_syn_E’: 0.5, ‘v_rest’: -65.0, ‘cm’: 1.0,
recordable = [’spikes’, ‘v’]
default_initial_values = {‘isyn_inh’: 0.0, ‘isyn_exc’: 0.0, ‘v’: -65.0}
units = {‘isyn_inh’: ‘nA’, ‘isyn_exc’: ‘nA’, ‘v’: ‘mV’}
Integrate-and-fire neurons with adaptation
class Izhikevich(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Izhikevich spiking model with a quadratic non-linearity, according to:
Izhikevich (2003), IEEE Transactions on Neural Networks, 14(6)
dv/dt = 0.04*v^2 + 5*v + 140 - u + I
du/dt = a*(b*v - u)
Synapses are modeled as Dirac delta currents (voltage step), as in the original model.
NOTE: the name should probably be changed to match standard nomenclature, e.g. QIF_cond_delta_etc_etc, although keeping “Izhikevich” as an alias would be good.
injectable = True
conductance_based = False
default_parameters = {‘a’: 0.02, ‘c’: -65.0, ‘b’: 0.2, ‘d’: 2.0, ‘i_offset’: 0.0}
recordable = [’spikes’, ‘v’, ‘u’]
voltage_based_synapses = True
default_initial_values = {‘u’: -14.0, ‘v’: -70.0}
units = {‘u’: ‘mV/ms’, ‘v’: ‘mV’}
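For example, continuing the sketch above (with sim a PyNN backend module; the values correspond to the regular-spiking parameters of Izhikevich's paper, shown purely as an illustration):
>>> neurons = sim.Population(10, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))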
class EIF_cond_exp_isfa_ista(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Exponential integrate and fire neuron with spike-triggered and sub-threshold adaptation currents (isfa, ista resp.)
according to:
Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of
Neuronal Activity. J Neurophysiol 94:3637-3642
See also: IF_cond_exp_gsfa_grr, EIF_cond_alpha_isfa_ista
injectable = True
conductance_based = True
default_parameters = {‘tau_refrac’: 0.1, ‘a’: 4.0, ‘tau_m’: 9.3667, ‘e_rev_E’: 0.0, ‘i_offset’: 0.0, ‘cm’: 0.281, ‘delta_T
recordable = [’spikes’, ‘v’, ‘w’, ‘gsyn_exc’, ‘gsyn_inh’]
default_initial_values = {‘gsyn_exc’: 0.0, ‘gsyn_inh’: 0.0, ‘w’: 0.0, ‘v’: -70.6}
units = {‘gsyn_exc’: ‘uS’, ‘gsyn_inh’: ‘uS’, ‘w’: ‘nA’, ‘v’: ‘mV’}
class EIF_cond_alpha_isfa_ista(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Exponential integrate and fire neuron with spike-triggered and sub-threshold adaptation currents (isfa, ista resp.)
according to:
Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of
Neuronal Activity. J Neurophysiol 94:3637-3642
See also: IF_cond_exp_gsfa_grr, EIF_cond_exp_isfa_ista
injectable = True
conductance_based = True
default_parameters = {‘tau_refrac’: 0.1, ‘a’: 4.0, ‘tau_m’: 9.3667, ‘e_rev_E’: 0.0, ‘i_offset’: 0.0, ‘cm’: 0.281, ‘delta_T
recordable = [’spikes’, ‘v’, ‘w’, ‘gsyn_exc’, ‘gsyn_inh’]
default_initial_values = {‘gsyn_exc’: 0.0, ‘gsyn_inh’: 0.0, ‘w’: 0.0, ‘v’: -70.6}
units = {‘gsyn_exc’: ‘uS’, ‘gsyn_inh’: ‘uS’, ‘w’: ‘nA’, ‘v’: ‘mV’}
class IF_cond_exp_gsfa_grr(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Linear leaky integrate and fire model with fixed threshold, decaying-exponential post-synaptic conductance,
conductance based spike-frequency adaptation, and a conductance-based relative refractory mechanism.
See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal
theories. Neural Computation 19: 2958-3010.
See also: EIF_cond_alpha_isfa_ista
injectable = True
conductance_based = True
default_parameters = {‘tau_refrac’: 0.1, ‘e_rev_rr’: -75.0, ‘tau_m’: 20.0, ‘e_rev_E’: 0.0, ‘i_offset’: 0.0, ‘cm’: 1.0, ‘e_
recordable = [’spikes’, ‘v’, ‘g_r’, ‘g_s’, ‘gsyn_exc’, ‘gsyn_inh’]
default_initial_values = {‘gsyn_exc’: 0.0, ‘gsyn_inh’: 0.0, ‘g_r’: 0.0, ‘g_s’: 0.0, ‘v’: -65.0}
units = {‘gsyn_exc’: ‘uS’, ‘gsyn_inh’: ‘uS’, ‘g_r’: ‘nS’, ‘g_s’: ‘nS’, ‘v’: ‘mV’}
Spike sources
class SpikeSourcePoisson(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Spike source, generating spikes according to a Poisson process.
injectable = False
conductance_based = True
default_parameters = {‘duration’: 10000000000.0, ‘start’: 0.0, ‘rate’: 1.0}
recordable = [’spikes’]
receptor_types = ()
class SpikeSourceArray(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Spike source generating spikes at the times given in the spike_times array.
injectable = False
conductance_based = True
default_parameters = {‘spike_times’: Sequence([])}
recordable = [’spikes’]
receptor_types = ()
class SpikeSourceInhGamma(**parameters)
Bases: pyNN.standardmodels.StandardCellType
Spike source, generating realizations of an inhomogeneous gamma process, employing the thinning method.
See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal
theories. Neural Computation 19: 2958-3010.
injectable = False
conductance_based = True
default_parameters = {‘a’: Sequence([ 1.]), ‘tbins’: Sequence([ 0.]), ‘duration’: 10000000000.0, ‘b’: Sequence([ 1.]), ‘
recordable = [’spikes’]
receptor_types = ()
Native cell types
Todo
WRITE THIS PART
Utility functions
18.1.5 Synapse models
The synaptic connection between two neurons is represented as a “synapse type” class. Note that synaptic attributes
that belong solely to the post-synaptic neuron, such as the decay of the post-synaptic conductance, are part of the cell
type model. The “synapse type” models the synaptic delay, the synaptic weight, and any dynamic behaviour of the
synaptic weight, i.e. synaptic plasticity.
As for cell types, PyNN has a library of “standard” synapse types that should give the same behaviour on different
simulators, and also supports the use of “native” synapse types, limited to a single simulator.
Standard synapse types
Base class
All standard synapse types inherit from the following base class, and have the same methods, as listed below.
class StandardSynapseType(**parameters)
Bases: pyNN.standardmodels.StandardModelType, pyNN.models.BaseSynapseType
parameter_checks = {‘delay’: <function check_delays at 0x7fa3c1bf7398>, ‘weight’: <function check_weights at 0x7fa
get_schema()
Returns the model schema: i.e. a mapping of parameter names to allowed parameter types.
computed_parameters()
Return a list of parameters whose values must be computed from more than one other parameter.
connection_type = None
default_initial_values = {}
default_parameters = {}
describe(template=’modeltype_default.txt’, engine=’default’)
Returns a human-readable description of the cell or synapse type.
The output may be customized by specifying a different template together with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
extra_parameters = {}
get_native_names(*names)
Return a list of native parameter names for a given model.
get_parameter_names()
Return the names of the parameters of this model.
has_parameter(name)
Does this model have a parameter with the given name?
has_presynaptic_components = False
native_parameters
A ParameterSpace containing parameter names and values translated from the standard PyNN names
and units to simulator-specific (“native”) names and units.
reverse_translate(native_parameters)
Translate simulator-specific model parameters to standardized parameters.
scaled_parameters()
Return a list of parameters for which there is a unit change between standard and native parameter values.
simple_parameters()
Return a list of parameters for which there is a one-to-one correspondence between standard and native
parameter values.
translate(parameters)
Translate standardized model parameters to simulator-specific parameters.
translations = {}
Static/fixed synapses
class StaticSynapse(**parameters)
Bases: pyNN.standardmodels.StandardSynapseType
Synaptic connection with fixed weight and delay.
default_parameters = {‘delay’: None, ‘weight’: 0.0}
Short-term plasticity mechanisms
class TsodyksMarkramSynapse(**parameters)
Bases: pyNN.standardmodels.StandardSynapseType
Synapse exhibiting facilitation and depression, implemented using the model of Tsodyks, Markram et al.:
Tsodyks, Uziel and Markram (2000) Synchrony Generation in Recurrent Networks with Frequency-Dependent
Synapses. Journal of Neuroscience 20:RC50
Note that the time constant of the post-synaptic current is set in the neuron model, not here.
Arguments:
U: use parameter.
tau_rec: depression time constant (ms).
tau_facil: facilitation time constant (ms).
default_parameters = {‘delay’: None, ‘tau_facil’: 0.0, ‘U’: 0.5, ‘tau_rec’: 100.0, ‘weight’: 0.0}
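A usage sketch, with sim a PyNN backend module and pre and post hypothetical populations (the parameter values are illustrative):
>>> depressing = sim.TsodyksMarkramSynapse(U=0.5, tau_rec=800.0, tau_facil=0.0,
...                                        weight=0.01, delay=0.5)
>>> prj = sim.Projection(pre, post, sim.AllToAllConnector(),
...                      synapse_type=depressing)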
Long-term plasticity mechanisms
class STDPMechanism(timing_dependence=None, weight_dependence=None, voltage_dependence=None, dendritic_delay_fraction=1.0, weight=0.0, delay=None)
Bases: pyNN.standardmodels.StandardSynapseType
A specification for an STDP mechanism, combining a weight-dependence, a timing-dependence, and, optionally, a voltage-dependence of the synaptic change.
For point neurons, the synaptic delay d can be interpreted either as occurring purely in the pre-synaptic axon +
synaptic cleft, in which case the synaptic plasticity mechanism ‘sees’ the post-synaptic spike immediately and
the pre-synaptic spike after a delay d (dendritic_delay_fraction = 0), or as occurring purely in the post-synaptic
dendrite, in which case the pre-synaptic spike is seen immediately and the post-synaptic spike after a delay d
(dendritic_delay_fraction = 1), or as having both pre- and post-synaptic components (dendritic_delay_fraction
between 0 and 1).
In a future version of the API, we will allow the different components of the synaptic delay to be specified
separately in milliseconds.
model
possible_models
A list of available synaptic plasticity models for the current configuration (weight dependence, timing
dependence, ...) in the current simulator.
get_parameter_names()
has_parameter(name)
Does this model have a parameter with the given name?
get_schema()
Returns the model schema: i.e. a mapping of parameter names to allowed parameter types.
parameter_space
native_parameters
A dictionary containing the combination of parameters from the different components of the STDP model.
describe(template=’stdpmechanism_default.txt’, engine=’default’)
Returns a human-readable description of the STDP mechanism.
The output may be customized by specifying a different template togther with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
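Putting the pieces together, a complete STDP specification might look as follows (a sketch with illustrative values; the SpikePairRule and AdditiveWeightDependence components are described below):
>>> stdp = sim.STDPMechanism(
...     timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
...                                         A_plus=0.01, A_minus=0.012),
...     weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.04),
...     weight=0.02, delay=1.0)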
Weight-dependence components
class STDPWeightDependence(**parameters)
Bases: pyNN.standardmodels.StandardModelType
Base class for models of STDP weight dependence.
describe(template=’modeltype_default.txt’, engine=’default’)
Returns a human-readable description of the cell or synapse type.
The output may be customized by specifying a different template together with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
class AdditiveWeightDependence(w_min=0.0, w_max=1.0)
Bases: pyNN.standardmodels.STDPWeightDependence
The amplitude of the weight change is independent of the current weight. If the new weight would be less than
w_min it is set to w_min. If it would be greater than w_max it is set to w_max.
Arguments:
w_min: minimum synaptic weight, in the same units as the weight, i.e. µS or nA.
w_max: maximum synaptic weight.
default_parameters = {‘w_min’: 0.0, ‘w_max’: 1.0}
class MultiplicativeWeightDependence(w_min=0.0, w_max=1.0)
Bases: pyNN.standardmodels.STDPWeightDependence
The amplitude of the weight change depends on the current weight. For depression, Δw ∝ w - w_min; for
potentiation, Δw ∝ w_max - w.
Arguments:
w_min: minimum synaptic weight, in the same units as the weight, i.e. µS or nA.
w_max: maximum synaptic weight.
default_parameters = {‘w_min’: 0.0, ‘w_max’: 1.0}
class AdditivePotentiationMultiplicativeDepression(w_min=0.0, w_max=1.0)
Bases: pyNN.standardmodels.STDPWeightDependence
The amplitude of the weight change depends on the current weight for depression (Δw ∝ w) and is fixed for
potentiation.
Arguments:
w_min: minimum synaptic weight, in the same units as the weight, i.e. µS or nA.
w_max: maximum synaptic weight.
default_parameters = {‘w_min’: 0.0, ‘w_max’: 1.0}
class GutigWeightDependence(w_min=0.0, w_max=1.0, mu_plus=0.5, mu_minus=0.5)
Bases: pyNN.standardmodels.STDPWeightDependence
The amplitude of the weight change depends on (w_max - w)^mu_plus for potentiation and (w - w_min)^mu_minus for depression.
Arguments:
w_min: minimum synaptic weight, in the same units as the weight, i.e. µS or nA.
w_max: maximum synaptic weight.
mu_plus: see above
mu_minus: see above
default_parameters = {‘mu_plus’: 0.5, ‘w_min’: 0.0, ‘w_max’: 1.0, ‘mu_minus’: 0.5}
Timing-dependence components
class STDPTimingDependence(**parameters)
Bases: pyNN.standardmodels.StandardModelType
Base class for models of STDP timing dependence (triplets, etc)
class SpikePairRule(tau_plus=20.0, tau_minus=20.0, A_plus=0.01, A_minus=0.01)
Bases: pyNN.standardmodels.STDPTimingDependence
The amplitude of the weight change depends only on the relative timing of spike pairs, not triplets, etc. All
possible spike pairs are taken into account (cf Song and Abbott).
Arguments:
tau_plus: time constant of the positive part of the STDP curve, in milliseconds.
tau_minus: time constant of the negative part of the STDP curve, in milliseconds.
A_plus: amplitude of the positive part of the STDP curve.
A_minus: amplitude of the negative part of the STDP curve.
describe(template=’modeltype_default.txt’, engine=’default’)
Returns a human-readable description of the cell or synapse type.
The output may be customized by specifying a different template together with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
default_parameters = {‘tau_minus’: 20.0, ‘A_minus’: 0.01, ‘tau_plus’: 20.0, ‘A_plus’: 0.01}
Native plasticity models
NineML plasticity models
18.1.6 Current sources
18.1.7 Simulation control
18.1.8 Random numbers
class RandomDistribution(distribution, parameters_pos=None, rng=None, **parameters_named)
Class which defines a next(n) method which returns an array of n random numbers from a given distribution.
Arguments:
distribution: the name of a random number distribution.
parameters_pos: parameters of the distribution, provided as a tuple. For the correct ordering, see random.available_distributions.
rng: if present, should be a NumpyRNG, GSLRNG or NativeRNG object.
parameters_named: parameters of the distribution, provided as keyword arguments.
Parameters may be provided either through parameters_pos or through parameters_named, but not both. All
parameters must be provided; there are no default values. Parameter names are, in general, as used in Wikipedia.
Examples:
>>> rd = RandomDistribution('uniform', (-70, -50))
>>> rd = RandomDistribution('normal', mu=0.5, sigma=0.1)
>>> rng = NumpyRNG(seed=8658764)
>>> rd = RandomDistribution('gamma', k=2.0, theta=5.0, rng=rng)
Available distributions:

Name                        Parameters            Comments
binomial                    n, p
gamma                       k, theta
exponential                 beta
lognormal                   mu, sigma
normal                      mu, sigma
normal_clipped              mu, sigma, low, high  Values outside (low, high) are redrawn
normal_clipped_to_boundary  mu, sigma, low, high  Values below/above low/high are set to low/high
poisson                     lambda_               Trailing underscore since lambda is a Python keyword
uniform                     low, high
uniform_int                 low, high
vonmises                    mu, kappa
next(n=None, mask_local=None)
Return n random numbers from the distribution.
class NumpyRNG(seed=None, parallel_safe=True)
Bases: pyNN.random.WrappedRNG
Wrapper for the numpy.random.RandomState class (Mersenne Twister PRNG).
translations = {‘lognormal’: (‘lognormal’, {‘mu’: ‘mean’, ‘sigma’: ‘sigma’}), ‘normal_clipped’: (‘normal_clipped’, {‘
normal_clipped(mu=0.0, sigma=1.0, low=-inf, high=inf, size=None)
normal_clipped_to_boundary(mu=0.0, sigma=1.0, low=-inf, high=inf, size=None)
describe()
next(n=None, distribution=None, parameters=None, mask_local=None)
Return n random numbers from the specified distribution.
If:
• n is None, return a float,
• n >= 1, return a Numpy array,
• n < 0, raise an Exception,
• n is 0, return an empty array.
If called with distribution=None, returns uniformly distributed floats in the range [0, 1)
class GSLRNG(seed=None, type=’mt19937’, parallel_safe=True)
Bases: pyNN.random.WrappedRNG
Wrapper for the GSL random number generators.
translations = {‘uniform_int’: (‘uniform_int’, {‘high’: ‘high’, ‘low’: ‘low’}), ‘lognormal’: (‘lognormal’, {‘mu’: ‘zeta’,
uniform_int(low, high, size=None)
gamma(k, theta, size=None)
normal(mu=0.0, sigma=1.0, size=None)
normal_clipped(mu=0.0, sigma=1.0, low=-inf, high=inf, size=None)
describe()
next(n=None, distribution=None, parameters=None, mask_local=None)
Return n random numbers from the specified distribution.
If:
• n is None, return a float,
• n >= 1, return a Numpy array,
• n < 0, raise an Exception,
• n is 0, return an empty array.
If called with distribution=None, returns uniformly distributed floats in the range [0, 1)
class NativeRNG(seed=None)
Bases: pyNN.random.AbstractRNG
Signals that the simulator’s own native RNG should be used. Each simulator module should implement a class
of the same name which inherits from this and which sets the seed appropriately.
next(n=None, distribution=None, parameters=None, mask_local=None)
Return n random numbers from the specified distribution.
If:
• n is None, return a float,
• n >= 1, return a Numpy array,
• n < 0, raise an Exception,
• n is 0, return an empty array.
If called with distribution=None, returns uniformly distributed floats in the range [0, 1)
Adapting a different random number generator to work with PyNN
Todo
write this
18.1.9 Parameter handling
Note: these classes are not part of the PyNN API. They should not be used in PyNN scripts, they are intended for
implementing backends. You are not required to use them when implementing your own backend, however, as long as
your backend conforms to the API.
The main abstractions in PyNN are the population of neurons, and the set of connections (a ‘projection’) between two
populations. Setting the parameters of individual neuron and synapse models, therefore, mainly takes place at the level
of populations and projections.
Note: it is also possible to set the parameters of neurons and synapses individually, but this is generally less efficient.
Any model parameter in PyNN can be expressed as:
• a single value - all neurons in a population or synapses in a projection get the same value
• a RandomDistribution object - each element gets a value drawn from the random distribution
• a list/array of values of the same size as the population/projection
• a mapping function, where a mapping function accepts a either a single argument i (for a population) or two
arguments (i, j) (for a projection) and returns a single value.
A “single value” is usually a single number, but for some parameters (e.g. for spike times) it may be a list/array of
numbers.
To handle all these possibilities in a uniform way, and at the same time allow for efficient parallelization, in the
‘common’ implementation of the PyNN API all parameter values are converted into LazyArray objects, and the set
of parameters for a model is contained in a dict-like object, ParameterSpace.
The LazyArray class
LazyArray is a PyNN-specific sub-class of a more general class, larray, and most of its functionality comes
from the parent class. Full documentation for larray is available in the lazyarray package, but we give here a quick
overview.
LazyArray has three important features in the context of PyNN:
1. any operations on the array (potentially including array construction) are not performed immediately, but are
delayed until evaluation is specifically requested.
2. evaluation of only parts of the array is possible, which means that in a parallel simulation with MPI, all processes
have the same LazyArray for a parameter, but on a given process, only the part of the array which is needed
for the neurons/synapses that exist on that process need be evaluated.
3. since often all neurons in a population or synapses in a projection have the same value for a given parameter,
a LazyArray created from a single value evaluates to that value: the full array is never created unless this is
requested.
For example, suppose we have two parameters, tau_m, which is constant, and v_thresh which varies according to the
position of the neuron in the population.
>>> from pyNN.parameters import LazyArray
>>> tau_m = 2 * LazyArray(10.0, shape=(20,))
>>> v_thresh = -55 + LazyArray(lambda i: 0.1*i, shape=(20,))
If we evaluate tau_m we get a full, homogeneous array:
>>> tau_m.evaluate()
array([ 20.,  20.,  20.,  20.,  20.,  20.,  20.,  20.,  20.,  20.,  20.,
        20.,  20.,  20.,  20.,  20.,  20.,  20.,  20.,  20.])
but we could also have asked just for the single number, in which case the full array would never be created:
>>> tau_m.evaluate(simplify=True)
20.0
Similarly, we can evaluate v_thresh to get a normal NumPy array:
>>> v_thresh.evaluate()
array([-55. , -54.9, -54.8, -54.7, -54.6, -54.5, -54.4, -54.3, -54.2,
-54.1, -54. , -53.9, -53.8, -53.7, -53.6, -53.5, -53.4, -53.3,
-53.2, -53.1])
but we can also take, for example, only every fifth value, in which case the operation “add -55” only gets performed
for those elements.
>>> v_thresh[::5]
array([-55. , -54.5, -54. , -53.5])
In this example the operation is very fast, but with slower operations (e.g. distance calculations) and large arrays, the
time savings can be considerable (see lazyarray performance).
In summary, by using LazyArray, we can pass parameters around in an optimised way without having to worry
about exactly what form the parameter value takes, hence avoiding a lot of logic at multiple points in the code.
Reference
class LazyArray(value, shape=None, dtype=None)
Bases: lazyarray.larray
Optimises storage of arrays in various ways:
• stores only a single value if all the values in the array are the same
• if the array is created from a RandomDistribution or a function f(i,j), then elements are only
evaluated when they are accessed. Any operations performed on the array are also queued up to be
executed on access.
The main intention of the latter is to save memory for very large arrays by accessing them one row or column at
a time: the entire array need never be in memory.
Arguments:
value: may be an int, long, float, bool, NumPy array, iterator, generator or a function, f(i) or f(i,j), depending on the dimensions of the array. f(i,j) should return a single number when i and j are integers, and a
1D array when either i or j or both is a NumPy array (in the latter case the two arrays must have equal
lengths).
shape: a tuple giving the shape of the array, or None
dtype: the NumPy dtype.
__getitem__(*args, **kwargs)
Return one or more items from the array, as for NumPy arrays.
addr may be a single integer, a slice, a NumPy boolean array or a NumPy integer array.
by_column(mask=None)
Iterate over the columns of the array. Columns will be yielded either as a 1D array or as a single value (for
a flat array).
mask: either None or a boolean array indicating which columns should be included.
apply(f )
Add the function f(x) to the list of the operations to be performed, where x will be a scalar or a numpy
array.
>>> m = larray(4, shape=(2,2))
>>> m.apply(numpy.sqrt)
>>> m.evaluate()
array([[ 2.,  2.],
       [ 2.,  2.]])
check_bounds(*args, **kwargs)
Check whether the given address is within the array bounds.
evaluate(*args, **kwargs)
Return the lazy array as a real NumPy array.
If the array is homogeneous and simplify is True, return a single numerical value.
is_homogeneous
True if all the elements of the array are the same.
ncols
Size of the second dimension (if it exists) of the array.
nrows
Size of the first dimension of the array.
shape
Shape of the array
size
Total number of elements in the array.
The ParameterSpace class
ParameterSpace is a dict-like class that contains LazyArray objects.
In addition to the usual dict methods, it has several methods that allow operations on all the lazy arrays within it at
once. For example:
>>> from pyNN.parameters import ParameterSpace
>>> ps = ParameterSpace({'a': [2, 3, 5, 8], 'b': 7, 'c': lambda i: 3*i+2}, shape=(4,))
>>> ps['c']
<larray: base_value=<function <lambda> at ...> shape=(4,) dtype=None, operations=[]>
>>> ps.evaluate()
>>> ps['c']
array([ 2, 5, 8, 11])
The evaluate() method also accepts a mask, in order to evaluate only part of the lazy arrays:
>>> ps = ParameterSpace({'a': [2, 3, 5, 8, 13], 'b': 7, 'c': lambda i: 3*i+2}, shape=(5,))
>>> ps.evaluate(mask=[1, 3, 4])
>>> ps.as_dict()
{'a': array([ 3, 8, 13]), 'c': array([ 5, 11, 14]), 'b': array([7, 7, 7])}
An example with two-dimensional arrays:
>>> ps2d = ParameterSpace({'a': [[2, 3, 5, 8, 13], [21, 34, 55, 89, 144]],
...                        'b': 7,
...                        'c': lambda i, j: 3*i - 2*j}, shape=(2, 5))
>>> ps2d.evaluate(mask=(slice(None), [1, 3, 4]))
>>> print(ps2d['a'])
[[  3   8  13]
 [ 34  89 144]]
>>> print(ps2d['c'])
[[-2 -6 -8]
 [ 1 -3 -5]]
There are also several methods to allow iterating over the parameter space in different ways. A ParameterSpace
can be viewed both as a dict containing arrays and as an array of dicts. Iterating over a parameter space gives the
latter view:

>>> for D in ps:
...     print(D)
...
{'a': 3, 'c': 5, 'b': 7}
{'a': 8, 'c': 11, 'b': 7}
{'a': 13, 'c': 14, 'b': 7}

unlike a plain dict, for which iteration gives the keys. items() works as for a normal dict:

>>> for key, value in ps.items():
...     print(key, "=", value)
a = [ 3  8 13]
c = [ 5 11 14]
b = [7 7 7]
Reference
class ParameterSpace(parameters, schema=None, shape=None, component=None)
Representation of one or more points in a parameter space.
i.e. represents one or more parameter sets, where each parameter set has the same parameter names and types
but the parameters may have different values.
Arguments:
parameters: a dict containing values of any type that may be used to construct a lazy array, i.e. int, float,
NumPy array, RandomDistribution, function that accepts a single argument.
schema: a dict whose keys are the expected parameter names and whose values are the expected parameter
types
component: optional - class for which the parameters are destined. Used in error messages.
shape: the shape of the lazy arrays that will be constructed.
__getitem__(name)
x.__getitem__(y) <==> x[y]
__iter__()
Return an array-element-wise iterator over the parameter space.
Each item in the iterator is a dict, containing the same keys as the ParameterSpace. For the ith dict
returned by the iterator, each value is the ith element of the corresponding lazy array in the parameter
space.
Example:
>>> ps = ParameterSpace({'a': [2, 3, 5, 8], 'b': 7, 'c': lambda i: 3*i+2}, shape=(4,))
>>> ps.evaluate()
>>> for D in ps:
...     print(D)
...
{'a': 2, 'c': 2, 'b': 7}
{'a': 3, 'c': 5, 'b': 7}
{'a': 5, 'c': 8, 'b': 7}
{'a': 8, 'c': 11, 'b': 7}
shape
Size of the lazy arrays contained within the parameter space
keys() → list of PS’s keys.
items() → an iterator over the (key, value) items of PS.
Note that the values will all be LazyArray objects.
update(**parameters)
Update the contents of the parameter space according to the (key, value) pairs in **parameters. All
values will be turned into lazy arrays.
If the ParameterSpace has a schema, the keys and the data types of the values will be checked against
the schema.
pop(name, d=None)
Remove the given parameter from the parameter set and from its schema, and return its value.
is_homogeneous
True if all of the lazy arrays within are homogeneous.
evaluate(mask=None, simplify=False)
Evaluate all lazy arrays contained in the parameter space, using the given mask.
as_dict()
Return a plain dict containing the same keys and values as the parameter space. The values must first have
been evaluated.
columns()
For a 2D space, return a column-wise iterator over the parameter space.
parallel_safe
has_native_rngs
Return True if the parameter set contains any NativeRNGs
expand(new_shape, mask)
Increase the size of the ParameterSpace.
Existing array values are mapped to the indices given in mask. New array values are set to NaN.
The Sequence class
class Sequence(value)
Represents a sequence of numerical values.
The reason for defining this class rather than just using a NumPy array is to avoid the ambiguity of “is a given
array a single parameter value (e.g. a spike train for one cell) or an array of parameter values (e.g. one number
per cell)?”.
Arguments:
value: anything which can be converted to a NumPy array, or another Sequence object.
__mul__(val)
Return a new Sequence in which all values in the original Sequence have been multiplied by val.
If val is itself an array, return an array of Sequence objects, where sequence i is the original sequence
multiplied by element i of val.
__div__(val)
Return a new Sequence in which all values in the original Sequence have been divided by val.
If val is itself an array, return an array of Sequence objects, where sequence i is the original sequence
divided by element i of val.
max()
Return the maximum value from the sequence.
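For example (a minimal sketch):
>>> from pyNN.parameters import Sequence
>>> spike_times = Sequence([5.0, 10.0, 15.0])
>>> later = spike_times * 2.0   # a new Sequence, each value doubled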
18.1.10 Spatial structure
Structure classes
Structure classes all inherit from the following base class, and inherit its methods:
class BaseStructure
Bases: object
get_parameters()
Return a dict containing the parameters of the Structure.
describe(template=’structure_default.txt’, engine=’default’)
Returns a human-readable description of the network structure.
The output may be customized by specifying a different template together with an associated template
engine (see pyNN.descriptions).
If template is None, then a dictionary containing the template context will be returned.
generate_positions(n)
Calculate and return the positions of n neurons positioned according to this structure.
class Line(dx=1.0, x0=0.0, y=0.0, z=0.0)
Bases: pyNN.space.BaseStructure
Represents a structure with neurons distributed evenly on a straight line.
Arguments:
dx: distance between points in the line.
y, z: y- and z-coordinates of all points in the line.
x0: x-coordinate of the first point in the line.
parameter_names = (‘dx’, ‘x0’, ‘y’, ‘z’)
generate_positions(n)
Calculate and return the positions of n neurons positioned according to this structure.
class Grid2D(aspect_ratio=1.0, dx=1.0, dy=1.0, x0=0.0, y0=0.0, z=0, fill_order=’sequential’, rng=None)
Bases: pyNN.space.BaseStructure
Represents a structure with neurons distributed on a 2D grid.
Arguments:
dx, dy: distances between points in the x, y directions.
x0, y0: coordinates of the starting corner of the grid.
z: the z-coordinate of all points in the grid.
aspect_ratio: ratio of the number of grid points per side (not the ratio of the side lengths, unless dx ==
dy)
fill_order: may be ‘sequential’ or ‘random’
parameter_names = (‘aspect_ratio’, ‘dx’, ‘dy’, ‘x0’, ‘y0’, ‘z’, ‘fill_order’)
calculate_size(n)
docstring goes here
generate_positions(n)
Calculate and return the positions of n neurons positioned according to this structure.
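For example, the following sketch distributes 16 neurons on a grid with roughly four times as many points along x as along y (the values are illustrative):
>>> from pyNN.space import Grid2D
>>> grid = Grid2D(aspect_ratio=4.0, dx=10.0, dy=10.0)
>>> positions = grid.generate_positions(16)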
class Grid3D(aspect_ratioXY=1.0, aspect_ratioXZ=1.0, dx=1.0, dy=1.0, dz=1.0, x0=0.0, y0=0.0, z0=0, fill_order=’sequential’, rng=None)
Bases: pyNN.space.BaseStructure
Represents a structure with neurons distributed on a 3D grid.
Arguments:
dx, dy, dz: distances between points in the x, y, z directions.
x0, y0, z0: coordinates of the starting corner of the grid.
aspect_ratioXY, aspect_ratioXZ: ratios of the number of grid points per side (not the ratio of the side
lengths, unless dx == dy == dz)
fill_order: may be ‘sequential’ or ‘random’.
If fill_order is ‘sequential’, the z-index will be filled first, then y then x, i.e. the first cell will be at (0,0,0) (given
default values for the other arguments), the second at (0,0,1), etc.
parameter_names = (‘aspect_ratios’, ‘dx’, ‘dy’, ‘dz’, ‘x0’, ‘y0’, ‘z0’, ‘fill_order’)
calculate_size(n)
docstring goes here
generate_positions(n)
Calculate and return the positions of n neurons positioned according to this structure.
class RandomStructure(boundary, origin=(0.0, 0.0, 0.0), rng=None)
Bases: pyNN.space.BaseStructure
Represents a structure with neurons distributed randomly within a given volume.
Arguments:
boundary: a subclass of Shape.
origin: the coordinates (x, y, z) of the centre of the volume.
parameter_names = (‘boundary’, ‘origin’, ‘rng’)
generate_positions(n)
Calculate and return the positions of n neurons positioned according to this structure.
Shape classes
class Cuboid(width, height, depth)
Bases: pyNN.space.Shape
Represents a cuboidal volume within which neurons may be distributed.
Arguments:
height: extent in y direction
width: extent in x direction
depth: extent in z direction
sample(n, rng)
Return n points distributed randomly with uniform density within the cuboid.
class Sphere(radius)
Bases: pyNN.space.Shape
Represents a spherical volume within which neurons may be distributed.
sample(n, rng)
Return n points distributed randomly with uniform density within the sphere.
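Shapes are typically used together with RandomStructure, for example (a sketch; the radius and origin are arbitrary):
>>> from pyNN.space import RandomStructure, Sphere
>>> structure = RandomStructure(boundary=Sphere(radius=100.0),
...                             origin=(0.0, 0.0, 50.0))
>>> positions = structure.generate_positions(50)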
The Space class
class Space(axes=None, scale_factor=1.0, offset=0.0, periodic_boundaries=None)
Class representing a space within which distances can be calculated. The space is Cartesian, may be 1-, 2- or 3-dimensional, and may have periodic boundaries in any of the dimensions.
Arguments:
axes: if not supplied, then the 3D distance is calculated. If supplied, axes should be a string containing
the axes to be used, e.g. ‘x’, or ‘yz’. axes=’xyz’ is the same as axes=None.
scale_factor: it may be that the pre and post populations use different units for position, e.g. degrees and
µm. In this case, scale_factor can be specified, which is applied to the positions in the post-synaptic
population.
offset: if the origins of the coordinate systems of the pre- and post- synaptic populations are different,
offset can be used to adjust for this difference. The offset is applied before any scaling.
periodic_boundaries: either None, or a tuple giving the boundaries for each dimension, e.g. ((x_min,
x_max), None, (z_min, z_max)).
AXES = {‘xz’: [0, 2], ‘yz’: [1, 2], ‘xy’: [0, 1], None: [0, 1, 2], ‘y’: [1], ‘x’: [0], ‘xyz’: [0, 1, 2], ‘z’: [2]}
distances(A, B, expand=False)
Calculate the distance matrix between two sets of coordinates, given the topology of the current space.
From http://projects.scipy.org/pipermail/numpy-discussion/2007-April/027203.html
distance_generator(f, g)
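For example, a sketch of a two-dimensional space with periodic boundaries in x and y (the boundary values are arbitrary):
>>> from pyNN.space import Space
>>> space = Space(axes='xy',
...               periodic_boundaries=((0.0, 500.0), (0.0, 500.0), None))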
Implementing your own Shape
Todo
write this
Implementing your own Structure
Todo
write this
18.1.11 Utility classes and functions
init_logging(logfile, debug=False, num_processes=1, rank=0, level=None)
Simple configuration of logging.
get_simulator(*arguments)
Import and return a PyNN simulator backend module based on command-line arguments.
The simulator name should be the first positional argument. If your script needs additional arguments, you can
specify them as (name, help_text) tuples. If you need more complex argument handling, you should use argparse
directly.
Returns (simulator, command-line arguments)
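A typical usage sketch, for a script run as e.g. python run.py nest --plot-figure (the option name is an arbitrary example):
>>> from pyNN.utility import get_simulator
>>> sim, options = get_simulator(("--plot-figure", "plot the simulation results"))
>>> sim.setup()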
class Timer
For timing script execution.
Timing starts on creation of the timer.
diff(format=None)
Return the time since the last time elapsed_time() or diff() was called.
If called with format=’long’, return a text representation of the time.
elapsedTime(*args, **kwargs)
Deprecated. Use elapsed_time() instead.
elapsed_time(format=None)
Return the elapsed time in seconds but keep the clock running.
If called with format="long", return a text representation of the time. Examples:
>>> timer.elapsed_time()
987
>>> timer.elapsed_time(format='long')
16 minutes, 27 seconds
mark(label)
Store the time since the last time elapsed_time(), diff() or mark() was called,
together with the provided label, in the attribute ‘marks’.
reset()
Reset the time to zero, and start the clock.
start()
Start/restart timing.
static time_in_words(s)
Formats a time in seconds as a string containing the time in days, hours, minutes, seconds. Examples:
>>> Timer.time_in_words(1)
1 second
>>> Timer.time_in_words(123)
2 minutes, 3 seconds
>>> Timer.time_in_words(24*3600)
1 day
class ProgressBar(width=77, char=’#’, mode=’fixed’)
Create a progress bar in the shell.
set_level(level)
Rebuild the bar string based on level, which should be a number between 0 and 1.
notify(msg=’Simulation finished.’, subject=’Simulation finished.’, smtphost=None, address=None)
Send an e-mail stating that the simulation has finished.
save_population(population, filename, variables=None)
Saves the spike_times of a population, together with its size, structure and label, so that it can be loaded back into a SpikeSourceArray population using the load_population() function.
load_population(filename, sim)
Loads a population that was saved with the save_population() function into SpikeSourceArray.
Basic plotting
class Figure(*panels, **options)
Provide simple, declarative specification of multi-panel figures.
Example:
Figure(
    Panel(segment.filter(name="v")[0], ylabel="Membrane potential (mV)"),
    Panel(segment.spiketrains, xlabel="Time (ms)"),
    title="Network activity",
).save("figure3.png")
Valid options are:
settings: for figure settings, e.g. {‘font.size’: 9}
annotations: a (multi-line) string to be printed at the bottom of the figure.
title: a string to be printed at the top of the figure.
save(filename)
Save the figure to file. The format is taken from the file extension.
class Panel(*data, **options)
Represents a single panel in a multi-panel figure.
A panel is a Matplotlib Axes or Subplot instance. A data item may be an AnalogSignal, AnalogSignalArray, or
a list of SpikeTrains. The Panel will automatically choose an appropriate representation. Multiple data items
may be plotted in the same panel.
Valid options are any valid Matplotlib formatting options that should be applied to the Axes/Subplot, plus in
addition:
data_labels: a list of strings of the same length as the number of data items.
line_properties: a list of dicts containing Matplotlib formatting options, of the same length as the
number of data items.
plot(axes)
Plot the Panel’s data in the provided Axes/Subplot instance.
comparison_plot(segments, labels, title='', annotations=None, fig_settings=None, with_spikes=True)
Given a list of segments, plot all the data they contain so as to be able to compare them.
Return a Figure instance.
CHAPTER 19
Old documents
19.1 Standard models
Standard models are neuron models that are available in at least two of the simulation engines supported by PyNN.
PyNN performs automatic translation of parameter names, types and units. Only a handful of models are currently
available, but the list will be expanded in future releases. To obtain a list of all the standard models available in a given
simulator, use the list_standard_models() function, e.g.:
>>> from pyNN import neuron
>>> neuron.list_standard_models()
['IF_cond_alpha', 'IF_curr_exp', 'IF_cond_exp', 'EIF_cond_exp_isfa_ista',
'SpikeSourceArray', 'HH_cond_exp', 'IF_cond_exp_gsfa_grr',
'IF_facets_hardware1', 'SpikeSourcePoisson', 'EIF_cond_alpha_isfa_ista',
'IF_curr_alpha']
19.1.1 Neurons
IF_curr_alpha
Leaky integrate and fire model with fixed threshold and alpha-function-shaped post-synaptic current.
Availability: NEST, NEURON, Brian
Name        Default value  Units  Description
v_rest      -65.0          mV     Resting membrane potential
cm          1.0            nF     Capacity of the membrane
tau_m       20.0           ms     Membrane time constant
tau_refrac  0.0            ms     Duration of refractory period
tau_syn_E   5.0            ms     Rise time of the excitatory synaptic alpha function
tau_syn_I   5.0            ms     Rise time of the inhibitory synaptic alpha function
i_offset    0.0            nA     Offset current
v_reset     -65.0          mV     Reset potential after a spike
v_thresh    -50.0          mV     Spike threshold
IF_curr_exp
Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic
currents for excitatory and inhibitory synapses.)
Availability: NEST, NEURON, Brian
Name        Default value  Units  Description
v_rest      -65.0          mV     Resting membrane potential
cm          1.0            nF     Capacity of the membrane
tau_m       20.0           ms     Membrane time constant
tau_refrac  0.0            ms     Duration of refractory period
tau_syn_E   5.0            ms     Decay time of excitatory synaptic current
tau_syn_I   5.0            ms     Decay time of inhibitory synaptic current
i_offset    0.0            nA     Offset current
v_reset     -65.0          mV     Reset potential after a spike
v_thresh    -50.0          mV     Spike threshold
IF_cond_alpha
Leaky integrate and fire model with fixed threshold and alpha-function-shaped post-synaptic conductance.
Availability: NEST, NEURON, Brian
Name        Default value  Units  Description
v_rest      -65.0          mV     Resting membrane potential
cm          1.0            nF     Capacity of the membrane
tau_m       20.0           ms     Membrane time constant
tau_refrac  0.0            ms     Duration of refractory period
tau_syn_E   5.0            ms     Rise time of the excitatory synaptic alpha function
tau_syn_I   5.0            ms     Rise time of the inhibitory synaptic alpha function
e_rev_E     0.0            mV     Reversal potential for excitatory input
e_rev_I     -70.0          mV     Reversal potential for inhibitory input
v_thresh    -50.0          mV     Spike threshold
v_reset     -65.0          mV     Reset potential after a spike
i_offset    0.0            nA     Offset current
IF_cond_exp
Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic conductance.
Availability: NEST, NEURON, Brian
Name        Default value  Units  Description
v_rest      -65.0          mV     Resting membrane potential
cm          1.0            nF     Capacity of the membrane
tau_m       20.0           ms     Membrane time constant
tau_refrac  0.0            ms     Duration of refractory period
tau_syn_E   5.0            ms     Decay time of the excitatory synaptic conductance
tau_syn_I   5.0            ms     Decay time of the inhibitory synaptic conductance
e_rev_E     0.0            mV     Reversal potential for excitatory input
e_rev_I     -70.0          mV     Reversal potential for inhibitory input
v_thresh    -50.0          mV     Spike threshold
v_reset     -65.0          mV     Reset potential after a spike
i_offset    0.0            nA     Offset current
HH_cond_exp
Single-compartment Hodgkin-Huxley-type neuron with transient sodium and delayed-rectifier potassium currents using the ion channel models from Traub.
Availability: NEST, NEURON, Brian
Name        Default value  Units
gbar_Na     20.0           uS
gbar_K      6.0            uS
g_leak      0.01           uS
cm          0.2            nF
v_offset    -63.0          mV
e_rev_Na    50.0           mV
e_rev_K     -90.0          mV
e_rev_leak  -65.0          mV
e_rev_E     0.0            mV
e_rev_I     -80.0          mV
tau_syn_E   0.2            ms
tau_syn_I   2.0            ms
i_offset    0.0            nA
EIF_cond_alpha_isfa_ista
Adaptive exponential integrate and fire neuron according to Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642
Availability: NEST, NEURON, Brian
Name        Default value  Units  Description
cm          0.281          nF     Capacity of the membrane
tau_refrac  0.0            ms     Duration of refractory period
v_spike     0.0            mV     Spike detection threshold
v_reset     -70.6          mV     Reset value for membrane potential after a spike
v_rest      -70.6          mV     Resting membrane potential (Leak reversal potential)
tau_m       9.3667         ms     Membrane time constant
i_offset    0.0            nA     Offset current
a           4.0            nS     Subthreshold adaptation conductance
b           0.0805         nA     Spike-triggered adaptation
delta_T     2.0            mV     Slope factor
tau_w       144.0          ms     Adaptation time constant
v_thresh    -50.4          mV     Spike initiation threshold
e_rev_E     0.0            mV     Excitatory reversal potential
tau_syn_E   5.0            ms     Rise time of excitatory synaptic conductance (alpha function)
e_rev_I     -80.0          mV     Inhibitory reversal potential
tau_syn_I   5.0            ms     Rise time of the inhibitory synaptic conductance (alpha function)
19.1.2 Spike sources
SpikeSourcePoisson
Spike source, generating spikes according to a Poisson process.
Availability: NEST, NEURON, Brian
Name      Default value  Units  Description
rate      0.0            s^-1   Mean spike frequency
start     0.0            ms     Start time
duration  10^9           ms     Duration of spike sequence
SpikeSourceArray
Spike source generating spikes at the times given in the spike_times array.
Availability: NEST, NEURON, Brian
Name         Default value  Units  Description
spike_times  []             ms     list or numpy array containing spike times
CHAPTER 20
Indices and tables
• genindex
• modindex
• search
Python Module Index

pyNN.connectors
pyNN.parameters
pyNN.random
pyNN.space
units (IF_curr_exp attribute), 145
units (Izhikevich attribute), 145
update() (ParameterSpace method), 158
V
voltage_based_synapses (Izhikevich attribute), 145
176
Index