The Fragmented Orchestra

Jane Grant, Tim Hodgson, Daniel Jones
University of Plymouth, UK / Goldsmiths, University of London, UK
[email protected], [email protected], [email protected]

Nick Ryan, John Matthias, Nicholas Outram
University of Plymouth, UK
[email protected], [email protected], [email protected]
Abstract
The Fragmented Orchestra is a distributed musical instrument which combines live audio streams from geographically disparate sites, and granulates each according to the
spike timings of an artificial spiking neural network. This
paper introduces the work, outlining its historical context,
technical architecture, neuronal model and network infrastructure, making specific reference to modes of interaction
with the public.
Keywords: distributed, installation, sound, neural network,
streaming audio, emergent, environmental.
1. Introduction
Figure 1. Photograph of the installation
The Fragmented Orchestra is a huge, distributed musical instrument, modelled on the firing of the human brain’s neurons. This collaborative work, spanning music, art and science, has evolved from its creators’ fascination with the inherent rhythms and adaptive learning of spiking neurons.
The piece is made up of one central space (installed, at the
time of writing, in Liverpool’s FACT gallery) and 24 geographically disparate sites across the UK, acting as the neurons of a distributed virtual cortex.
Each of the sites has a “soundbox” unit installed, containing a microphone, a speaker and a computer. This streams
live audio via the internet to the central installation; whenever an audio threshold is reached, the neuron corresponding to the site ‘fires’, causing a fragment of human-made or
elemental sound to be broadcast through the gallery. The
combined sound of the 24 speakers at the gallery is continuously transmitted back to each of the 24 sites.
As well as triggering a grain of sound, each firing event
is simultaneously communicated to the rest of the cortex.
Using a 2-dimensional spiking neuronal model [1], correla-
tions in sound activity at the sites cause relationships to develop between connected neurons. The system “learns” over
time how to respond to new patterns of sonic stimuli, resulting in co-ordinated cascades and crescendos of sound.
The geographical sites were selected for their sonic qualities, with the intention of building up a portrait of the UK’s
varied soundscapes. These include inner-city traffic, chanting from sports stadia, the hubbub of cattle auctions, and the chatter of migrating birds, combined with incidental and performed sounds from members of the public at
concert venues such as Gloucester Cathedral and Belfast’s
Sonic Arts Research Centre. The public, invited to play the
instrument at the 24 sites, can hear the effect their playing
has on the overall composition of the piece at each site and
at FACT. As members of the public use the instrument, they
become both player and audience of a vast and evolving musical composition extended across the UK.
This paper outlines and assesses a number of key aspects of The Fragmented Orchestra. Section 2 situates the project in its artistic and historical context; Section 3 describes the neuronal network at the core of the piece; Section 4 covers the architecture of the server that hosts the neuronal network and connects it to each of the broadcast sites; Section 5 outlines the network and communications infrastructure that hosts these sites, and the possibilities for interaction at the sites themselves; Section 6 describes the web interface, which provides live access to the piece for a global audience; and Section 7 assesses the outcome of the piece and suggests directions for future work. Finally, Section 8 briefly summarises the ancestry of the project.
2. Historical Context

The Fragmented Orchestra can be partially situated in a broad heritage of nondeterministic, environmental sound works, which draw upon the unpredictable nature of their surroundings to introduce an element of chance. From the Aeolian harp to Max Eastley's sound sculptures, through Alvin Lucier's explorations of acoustic phenomena, the surrounding world frequently proves a ripe source of new auditory experiences. Alan Licht provides an excellent overview of music and the environment in Sound Art, in which he asserts that, fundamentally, "sound art comes from the appreciation of the total environment of sounds, both wanted and unwanted" [2, p. 116].

Licht also discusses the post-Futurist movement of urban soundscapes into the concert hall. Equivalently, we consider the piece as opening a set of conduits from the outside world into the gallery, through which atmospheric sound can flow. In doing so, it is dislocated from its context and subjected to a new kind of listening; rather than being filtered out as noise, short fragments of elemental sound (street noise, café chatter, the buzzing of an unattended fridge) are transformed into objects of attention in their own right. This immediate use of soundscape brings to mind the elemental works of John Cage, particularly Variations VII (1966), which utilised telephone lines as a new source of hidden sonic activity.

However, The Fragmented Orchestra consciously seeks to develop a relationship with the participants: the human players of the instrument. In practice, many of these performers were accidental, being pulled in by the relayed sound at the 24 sites. These unwitting participants would begin a call-and-response relationship with the other 23 sites, which would often shift in nature and complexity in parallel with whatever else was being heard through the 'cortex' at that time. Other, more structured and curated performances have taken place, with participants working both singly and collaboratively. Improvisations evolved over many hundreds of miles with both spoken-word performers and classical musicians. A soundbox was sited just outside the main performance/installation space at FACT in Liverpool. Here, participants often sang or spoke into the microphone and then ran into the performance space to hear their sound ricochet around the gallery, in combination with sounds from the other sites.

Even in a situation in which a human participant is unaware of the existence of a nearby soundbox, The Fragmented Orchestra calls into question notions of musical authorship. As R. Murray Schafer asks, "Is the soundscape of the world an indeterminate composition over which we have no control or are we its composers and performers, responsible for giving it form and beauty?" [3]
3. Neuronal Model

The majority of the processing of sensory information in our brains is thought to take place in the cerebral cortex, which contains a population of billions of neurons, each making thousands of synaptic connections with its neighbours. A single neuron can be thought of as a cell which generates a travelling spiking signal to these connected neighbours when the voltage on its membrane exceeds a certain threshold voltage, a process called 'firing' or 'spiking'. A neuron receiving several spikes simultaneously (or within a very small time-window) is likely to have its voltage pushed beyond the threshold level and will therefore, in turn, send spike signals to its connected neighbours. Furthermore, connections between neurons which cause spiking signals tend to become potentiated, and those which do not become depleted, a phenomenon known as 'synaptic plasticity'. The dynamics of millions of such adaptive, interconnected neurons thus provide extremely rich behaviour, especially on a collective level of description (for a comprehensive introduction, see Gerstner and Kistler [4]), and patterns of firing regularly occur in groups of neurons. This is illustrated in Figure 2, taken from a mathematical simulation of a group of 1000 coupled neurons. For an example of such collective firing behaviour, see [1].

The neurons are numbered on the ordinate y-axis (with neuron number 1 at the bottom and neuron number 1000 at the top), and time, which runs from zero to 1000 milliseconds (one second), is on the x-axis. This is therefore a simulation of one second's activity of this group of 1000 artificial neurons. Every time a neuron fires, a dot is placed on the graph at the appropriate time on a horizontal line drawn from that particular neuron. The dots on the graph can thus be regarded as firing 'events'. In the particular graph shown, because of the plasticity of the neural connections, many of the events are centred in four bands, which appear as a pulse or 'wave' of spiking events in real time.

The spiking events are indeterminate (not predictable in advance) but are certainly not randomly distributed and, as in the above scenario, can be highly correlated. A rhythmic pattern such as the one pictured is likely to be connected with the 'polychronous' firing of a particular group of neurons [1], in which the firing of a particular neuron generates a sequence of events which stimulate a large number of neurons; these form a closed group, with the connections between them reinforced through repeated firing of the first neuron (the group of neurons fire not in synchrony but with polychrony).

It is the musicality of these sequences of firing events, or rhythms, which we are interested in. In our research (see, for example, Miranda and Matthias [5], Grant et al. [6], Matthias and Ryan [7]) we have started to look at what happens when each firing event is represented by a sonic event.

Figure 2. Spike timing graph. x = time (ms), y = neuron index.
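As a concrete illustration of the model, the following Python sketch reproduces the kind of simulation behind Figure 2. The update equations and parameter ranges follow Izhikevich's published reference implementation [1] (800 excitatory and 200 inhibitory neurons); the coupling strengths and noise levels are those of that reference code, not the values used in the installation itself.

```python
import numpy as np

# Minimal Izhikevich network sketch; sizes and weights follow the
# reference implementation, not The Fragmented Orchestra's settings.
N, Ne = 1000, 800                       # total neurons, excitatory count
re, ri = np.random.rand(Ne), np.random.rand(N - Ne)
a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])
b = np.concatenate([0.2 * np.ones(Ne), 0.25 - 0.05 * ri])
c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(N - Ne)])
d = np.concatenate([8 - 6 * re**2, 2 * np.ones(N - Ne)])
S = np.hstack([0.5 * np.random.rand(N, Ne), -np.random.rand(N, N - Ne)])

v = -65.0 * np.ones(N)                  # membrane voltage
u = b * v                               # recovery variable
firings = []                            # (time, neuron) pairs for the raster

for t in range(1000):                   # 1000 ms of activity, 1 ms steps
    I = np.concatenate([5 * np.random.randn(Ne), 2 * np.random.randn(N - Ne)])
    fired = np.where(v >= 30)[0]        # spike threshold at +30 mV
    firings.extend((t, n) for n in fired)
    v[fired], u[fired] = c[fired], u[fired] + d[fired]
    I += S[:, fired].sum(axis=1)        # spikes stimulate connected neighbours
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # two half-steps
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # for numerical stability
    u += a * (b * v - u)
```

Plotting the accumulated (time, neuron) pairs as dots yields a raster in exactly the format of Figure 2.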
3.1. The Fragmented Orchestra
In the case of The Fragmented Orchestra, each event is transformed into a short sample of sound streamed from one of
the 24 sites. The cortex is tiny, consisting of just 24 neurons,
but is sufficiently large to achieve many complex rhythms
and collective firing behaviours. Each site is represented
by its own neuron, which is also stimulated by the sound
from that site. The samples of sound are relayed in real time and last between 30 milliseconds and 2 seconds. We
have adapted a mathematical model of biological neuronal
networks developed by Eugene Izhikevich and others [1],
which is one of many models of spiking neuronal networks
which are based on a simplification of a model developed
by Hodgkin and Huxley in the 1950s [8]. These models are
essentially electrical in nature and consider the relationships
between the flow of ions across cell membranes and the flow
of voltage signals between the cells. In a sense, we have created a hybrid organism, involving 24 cortical neurons which
are also all crude sensory neurons (they are stimulated by
the ‘volume’ of the sound). This tiny organism is then distributed across the 24 sites and transformed into a musical
instrument.
One of the essential attributes of the musical interface is
the relationship between the stimulation of the neuron by the
audio in the site and the plasticity in the neuronal network. If
the amplitude of the audio in a particular site causes the neuronal membrane voltage to exceed a certain threshold value,
the neuron associated with that site will fire. This will cause
‘spike’ voltage signals to be sent to all of the other neurons
(we have an all-to-all topology), which may also fire if their
membrane voltage is above the threshold. Furthermore, a
synaptic plasticity algorithm [1] ensures that causal firing
is encouraged by the enhancement of corresponding interneuronal connections.
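As a rough sketch of this coupling, one might map each site's audio to an input current via a windowed RMS measure, and potentiate connections whose spikes causally precede their target's firing. The function names, gain and time constants below are our own illustrative assumptions, not the installation's actual parameters.

```python
import numpy as np

def stimulus_current(audio_block, gain=80.0):
    """Map a block of site audio to an input current for that site's
    neuron. The RMS measure and gain factor are assumptions."""
    rms = np.sqrt(np.mean(audio_block**2))
    return gain * rms

def update_weight(w, dt_ms, lr=0.01, tau=20.0, w_max=10.0):
    """Hedged sketch of causal plasticity: if neuron j fired shortly
    before neuron i (dt_ms > 0), potentiate the j->i connection;
    otherwise depress it. Constants are illustrative only."""
    dw = lr * np.exp(-abs(dt_ms) / tau)
    return np.clip(w + dw if dt_ms > 0 else w - dw, 0.0, w_max)
```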
Figure 3. Server architecture. (The Audio Unit graph routes the 24 FragRX soundbox streams through MatrixMixer1 into the SpikingNet (24 in, 48 out) and GSampler (48 in, 24 out) units, then through a Compressor and MatrixMixer2 to the gallery mix and back out to the soundboxes via FragTX, with hardware output through AUHAL to a MOTU 24I/O and M-Audio FW410.)
The interplay between the stimulation of the neurons by
external forces and the network dynamics associated with
synaptic plasticity is particularly interesting. The resulting
temporal firing dynamics have the potential to be much more
correlated and sophisticated than, say, a pink noise generator
exhibiting scale-symmetric correlations. There is, however,
a trade-off to be considered: heavy stimulation knocks the
system out of its ‘settling’, and so a very stimulated network
has less chance of exhibiting the correlations in rhythmic
dynamics associated with the global and long-term operation of synaptic plasticity.
4. Server Architecture
The neural network itself is hosted by a Mac OS X server
located at the installation site. The server application takes
the form of a graph of Audio Units, which handle the network streaming as well as the sound processing. The core of
the audio processing is handled by two units, one containing the spiking neural network, and the other containing the
granular sampler.
The network is based upon the Izhikevich model outlined above [1], taking its input from the physical network
of audio sources plus its internal “noise”. For each of these
neurons, the unit sends two streams of data onwards through
the chain, the first carrying the audio signal, the second containing spiking events generated by the neural network in
response to both the direct stimulus to that neuron, and to
activity elsewhere in the network.
The granular sampler is in many ways typical of this kind
of unit: it selects ‘grains’ of sound from the input, of lengths
varying from 30ms up to a second or more. These grains are
output in a rearranged and layered form, with the extent of
this rearrangement dependent on the chosen system settings.
The important difference with this granular sampler is
that the timing and density of the grains is determined by the
activity of the neural network, which in turn is partially determined by sonic events at the soundboxes (‘partially’ because the complex internal dynamics of the neural network
mean that its behaviour is very far from a simple mapping
of input to output).
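In outline, spike-triggered granulation of this kind might be sketched as follows. This is not the Audio Unit code itself; the helper names, sample rate and Hann envelope are our assumptions, though the grain-length range follows the text.

```python
import numpy as np

SR = 44100  # sample rate in Hz (our assumption)

def make_grain(channel_buffer, start, dur_sec, sr=SR):
    """Cut a grain from a channel's recent audio and apply a Hann
    envelope so overlapping grains mix without clicks."""
    n = int(dur_sec * sr)
    grain = channel_buffer[start:start + n]
    return grain * np.hanning(len(grain))

def render_spikes(spike_events, channel_buffers, out_len, sr=SR):
    """Mix one grain into the output for every (time_sec, channel)
    spike; the neural network, not a clock, decides when grains occur."""
    out = np.zeros(out_len)
    for t_sec, ch in spike_events:
        i = int(t_sec * sr)
        if i >= out_len:
            continue
        dur = np.random.uniform(0.03, 2.0)  # 30 ms up to ~2 s, per the text
        g = make_grain(channel_buffers[ch], 0, dur, sr)
        j = min(i + len(g), out_len)
        out[i:j] += g[: j - i]
    return out
```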
A graphical user interface to the server application allows the parameters of the system to be modified in real-time. Amongst others, these include the spiking threshold
for each neuron, the output volume of each channel, and an
adjustable per-channel delay buffer, from which the sampler
can draw grains, introducing a temporal ‘shuffling’ effect to
reorder the input. However, these parameters are not intended for frequent modulation when the piece is running;
rather, they are intended as a way to fine-tune the initial parameters of a generative system, ensuring that the density
of output is not too low or too great. The subsequent operation should primarily be governed by the internal neural
plasticity.
Figure 4. Network architecture. (Soundboxes connect to the central server at FACT; an Icecast media server relays the web broadcast, via a web server, to web clients.)
5. Soundbox Network
The Fragmented Orchestra is fundamentally a distributed
system, comprised of an interconnected network of communication nodes. Given that these sites are scattered throughout the length and breadth of the UK, it is entirely reliant on
the availability of a network infrastructure that is capable of
transmitting audio data in real-time over a great distance. Indeed, this kind of project has only been rendered technically feasible in recent years, courtesy of the rapid improvement in consumer-grade internet connectivity.
The architecture of The Fragmented Orchestra’s network
is illustrated in Figure 4. The central server, which performs
the neuro-granular processing, is located in the gallery space
at FACT. This is connected via the internet to the 24 soundbox sites, each of which has a sufficiently fast broadband
connection to relay audio streams in both directions in real-time.
This installation must be operational 24 hours a day, throughout its 3-month duration, so it is also vital that this software is as stable and resilient as possible. It must be able to cope with significant network delays and outages and must be configurable on a per-site basis according to specific needs; for example, one site must be scheduled to reduce its volume at night to avoid disturbing nearby residents. Moreover, due to the geographical dispersion, we must be able to comprehensively configure and control the system remotely.

Given these requirements, a piece of software to perform these tasks was written from scratch, developed over several months and thoroughly tested for reliability and stability. This has a number of key features, most of which are configurable for each soundbox (a sketch of the buffering and reconnection behaviour follows the list below).

Network connection monitor. The state of the network connection is continually monitored by a dedicated thread, whose job it is to destroy stale connections and reopen the connection as rapidly as possible.
Data buffering. When the client first successfully makes
its audio connection to the server, it prebuffers around
10 seconds of audio data. This ensures that a backlog
of audio is always available in case of network jitter
or lag.
Audio cache. The data buffer also serves a secondary role:
if the network connection is lost altogether, the contents of this buffer are output from the speaker on a
continual loop (and similarly at the network reception
point at the server-side). Thus, some representative
sound can continue to be played whilst awaiting the
return of network connectivity.
Outage monitor. A system-wide scheduler is operational
to reboot the computer in the worst-case scenario of
an unrecoverable network outage or other nonfatal system failure.
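The resilience features above can be summarised in a small read loop. The socket calls are standard Python; the buffer size, chunk size and retry policy are assumptions drawn from the description rather than the deployed code.

```python
import collections
import socket
import time

PREBUFFER_BYTES = 10 * 44100 * 2   # ~10 s of 16-bit mono audio (assumption)

def stream_with_fallback(host, port, play):
    """Prebuffer ~10 s of stream, then play continuously; on connection
    loss, loop the cached audio while reconnection is attempted."""
    cache = collections.deque(maxlen=PREBUFFER_BYTES)
    while True:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                # Prebuffer before playback starts, absorbing jitter and lag.
                while len(cache) < PREBUFFER_BYTES:
                    chunk = sock.recv(4096)
                    if not chunk:
                        raise ConnectionError("stream closed")
                    cache.extend(chunk)
                while True:
                    chunk = sock.recv(4096)
                    if not chunk:
                        raise ConnectionError("stream closed")
                    cache.extend(chunk)    # rolling ~10 s audio cache
                    play(chunk)
        except OSError:
            # Audio cache: loop recent audio until the connection returns;
            # a separate watchdog would reboot after a prolonged outage.
            play(bytes(cache))
            time.sleep(1)
```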
Due to the infrastructure’s foundation on consumer broadband lines, which do not provide a consistent, guaranteed
level of service, the buffering described above is a necessary feature to prevent frequent cut-outs in the audio stream.
This introduces an element of latency to the interaction at
soundbox sites; a sound event is not echoed back for several seconds or more. Though not intentional, this serves to
accentuate the vast distances travelled by the audio signals
over the geographical network.
Indeed, a fundamental part of the character of interacting
via The Fragmented Orchestra is found in this latency. As
part of a set of performances taking place based upon the
project, a relationship developed between the Bristol Watershed site and the site at Belfast’s Sonic Arts Research
Centre, resulting in a distributed, collaborative performance
mediated through The Fragmented Orchestra. Both parties
had to evolve structures and strategies that dealt both with
the fragmentation of their overall contribution and the time
delay relative to their performance partner.
In the programme for the first performance of his piece
Variations VII, John Cage discusses this class of novel, technically-induced situation: “The technical problems involved in any
single project tend to reduce the impact of the original idea,
but in being solved they produce a situation different than
anyone could have pre-imagined.” [9].
5.1. Interactivity at the Sites
A sonic event at one of the sites will be fed into the neural server system; after a delay equal to twice the network buffer size (with the roughly 10-second buffers described above, some 20 seconds in practice), the output of the neural network will then be played from the loudspeaker at the site. Based on this feedback
loop, a visitor to one of the sites can ‘play’ The Fragmented
Orchestra as an interactive instrument, perceiving their actions as mediated through this chain of pathways.
This simple call-and-response behaviour itself seems to provoke a significant amount of engagement with a public audience; there is a great fascination in the process of creating a sound and hearing its echoes return, having travelled
through a web of routes spanning the nation. However, there
is a greater degree of reactivity throughout the network than
is immediately evident. Because of the interconnected nature of the neural network that governs the audio spike timings, audio events at one site can implicitly become associated with those at others, based on dynamic correlation
patterns between the sites. As the system continues to run,
and more of these connections evolve, this results in sound
events at one site triggering those at another in synchrony –
an audible example of the “pulse” or wave phenomena described in Section 3.
It is consequently possible to create an auditory cascade
of events, the effect of which is akin to hearing echoes of
sound from the recent past in distant corners of the country.
In a learning system such as this, each event does not only
have an immediate effect but also has consequences for the
long-term behaviour of the system.
6. Web Interface
For those unable to visit the sites that make up The Fragmented Orchestra’s physical architecture, the piece can be
accessed using a web interface designed and developed for
the purpose [10]. The interface is an Adobe Flash application, based upon the sparse, geometric aesthetic that characterised the installation's visual identity, and integrated with
the rest of the website content. Indeed, the interface is the
first content that the user is presented with, foregrounding
the content of the piece above the supporting information
that the remainder of the site provides; it was the intention
to create an immediate, immersive experience, free of visual distractions and focusing the visitor on the sound of the
work.
The experience itself is markedly different to that of visiting the exhibition site, though modelled on a similar concept. The user is presented with a network of nodes, each
corresponding to a soundbox site, joined together by lines in
a similar way to a diagrammatic neural network. A minimal
set of control buttons enables the user to pause the piece,
add and remove nodes, and switch to a full-screen display
mode. No other visual elements are present, with the exception of a visual aura surrounding each node to indicate its
sound state.
Consider the space that the nodes inhabit as a 2D plane.
Each node conceptually outputs a live audio stream from its
site to a limited area around it in this plane; the user then
plays the role of a ‘listener’ within this space, whose position is determined by the mouse cursor. The amplitude and
stereo panning of each of the nearby nodes is then modulated in real-time to create the impression of virtual soundsource positioning, with the analogy that the user is moving
through the 2D space between this network of speakers – in
much the same way that the visitor to the physical installation configures their experience of the work by moving
between the speakers. This technique was tested and developed for AtomSwarm [11], an earlier performance piece by
Daniel Jones, and was found to be a deceptively effective
way to immerse the listener within the space of the composition itself.
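A minimal sketch of this distance-based mixing follows; the linear gain falloff, equal-power pan law and radius value are our own choices for illustration, as the original Flash implementation is not documented here.

```python
import math

def node_gain_pan(listener, node, radius=200.0):
    """Return (left, right) gains for one node, given 2D positions.
    Gain fades linearly to zero at `radius`; pan follows the node's
    horizontal offset. Both curves are illustrative assumptions."""
    dx, dy = node[0] - listener[0], node[1] - listener[1]
    dist = math.hypot(dx, dy)
    gain = max(0.0, 1.0 - dist / radius)          # silent beyond radius
    pan = max(-1.0, min(1.0, dx / radius))        # -1 = left, +1 = right
    angle = (pan + 1) * math.pi / 4               # equal-power pan law
    return gain * math.cos(angle), gain * math.sin(angle)
```

Summing these per-node stereo signals as the cursor moves recreates, in miniature, the experience of walking between the gallery's speakers.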
Here, however, it is possible to add, remove and reposition nodes at will, so the user can select their own palette
of sound sources from the array of varied sites. A choral
performance at Gloucester Cathedral can thus be combined
live with street sounds from Bristol, transforming The Fragmented Orchestra into an interactive instrument that can be
used to create novel and unique compositions by members
of the public.
It is critical to state that we view the website as complementary to the physical installation rather than as a strictly
alternative method of experiencing the piece. Though efforts were made to optimize the immersiveness of the interface, it is a radically different experience to the cocoon-like physical immersion of the 24-channel surround system,
whose embodiment excludes all other sensory stimuli to allow for complete focus on the sonic affects.
A common observation made by visitors to the installation is that the piece benefits from listening at various times of day and night. Given that the gallery is only ordinarily open during daylight hours, it is here that we feel that the 24/7 web access point succeeds in genuinely broadening the scope of the piece.
7. Conclusion and Future Work
At the time of writing, one month remains of The Fragmented Orchestra's 10-week tenure at FACT. The public response has been overwhelmingly one of curious engagement; much of the broadsheet coverage [12] has tended towards the explanatory rather than the critical, outlining the concepts, context and architecture behind the system. However, as is recognized by this coverage, the experience of the installation itself is actually wholly intuitive and uncomplicated; without any technical interface as such, the visitor is left simply to wander between the 24 ceiling-hung speakers, configuring their own unique experience of the installation through their location.
Timing is also critical: it quickly becomes apparent that,
as David Stubbs notes in The Wire [13], the sonic signature of the UK – and thus the installation – alters significantly according to the time of day, with inevitable early-morning lulls, lunchtime buzz and chatter, and a greater
number of intentional performances towards the evening.
Though this does mean that the qualities of the experience
will vary wildly according to the sound events at each of
the sites, this is also one of its most rewarding features. As
Stubbs comments [13]:

“So fitful and sporadic are these sounds that it’s left to chance just how well you’ll be rewarded from your visit here - but there is a strange, radio ham’s delight when, for instance, the sounds of a kid’s party in a London gallery break down the wires, unspoiled even in this era of media supersaturation.”
8. Origins of the Project

The Fragmented Orchestra was conceived by artist Jane Grant, physicist, musician and composer John Matthias, and composer Nick Ryan. Network and server engineering were coordinated by Daniel Jones and Tim Hodgson respectively, and the project's visual identity was created by the London-based Kin Design. The Fragmented Orchestra was awarded the PRS Foundation New Music Award 2008, whose funding part-enabled its realization.

Its concepts and technologies arose from a web of related projects. In 2004, John Matthias and Eduardo Miranda, based at the University of Plymouth, developed the Neurogranular Sampler [5], an instrument which triggers grains of sound from prerecorded sound files when artificial spiking neurons fire in an Izhikevich network. These initial experiments formed a large part of the piece Cortical Songs, written by John Matthias and Nick Ryan [7], which was released on Nonclassical Records in 2008. In this work, artificial spiking neurons control a set of lights, which flash when the neurons fire. The score is a combination of instructions to performers following flashing events, and conventional notation. At the same time, Jane Grant and Tim Hodgson further extended the Neurogranular Sampler, concentrating on longer grain durations, high firing density, and the creation of a working interface for the instrument. The instrument formed the basis of Jane Grant's video/audio work Threshold [14], which merges the spike timings with the sound of voice and breath.

References

[1] E. M. Izhikevich, J. A. Gally, and G. M. Edelman, "Spike-timing dynamics of neuronal groups," Cerebral Cortex, no. 14, pp. 933-944, 2004.
[2] A. Licht, Sound Art. New York: Rizzoli, 2007.
[3] R. M. Schafer, "The Music of the Environment," in Audio Culture, C. Cox and D. Warner, eds. London: Continuum, 2004.
[4] W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations and Plasticity. Cambridge: Cambridge University Press, 2002.
[5] E. R. Miranda and J. R. Matthias, "Granular Sampling using a Pulse-Coupled Network of Spiking Neurons," in Proceedings of EvoWorkshops 2005, Lecture Notes in Computer Science 3449, pp. 539-544. Berlin: Springer-Verlag, 2005.
[6] J. Grant et al., "Hearing Thinking," in Proceedings of EvoWorkshops 2009, in press.
[7] J. Matthias and N. Ryan, Cortical Songs (CD). London: Nonclassical Records, 2008.
[8] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," Journal of Physiology, no. 117, pp. 500-544, 1952.
[9] J. Cage, 9 Evenings: Theatre and Engineering [souvenir program]. New York, 1966.
[10] The Fragmented Orchestra [Web site], 2008 [accessed 10 April 2009]. Available: http://www.thefragmentedorchestra.com/
[11] D. Jones, "AtomSwarm: A Framework for Swarm Improvisation," in Proceedings of EvoWorkshops 2008, Lecture Notes in Computer Science 4974, pp. 423-432.
[12] A. Hickling, "Noises off," The Guardian, 19 December 2008. Available: http://www.guardian.co.uk/music/2008/dec/19/fragmentedorchestra-fact-liverpool
[13] D. Stubbs, "Ding>>Dong," The Wire, February 2009.
[14] J. Grant, Threshold. UK: Artsway Gallery, 2008.