University of Huddersfield Repository

Adkins, Monty and Segretier, Laurent
Skyler and Bliss
Original Citation
Adkins, Monty and Segretier, Laurent (2015) Skyler and Bliss. In: xCoAx 2015: Proceedings of the
Third Conference on Computation, Communication, Aesthetics and X. Universidade do Porto, Porto,
p. 311. ISBN 978-989-746-066-1
This version is available at http://eprints.hud.ac.uk/25390/
The University Repository is a digital collection of the research output of the
University, available on Open Access. Copyright and Moral Rights for the items
on this site are retained by the individual author and/or other copyright owners.
Users may access full items free of charge; copies of full text items generally
can be reproduced, displayed or performed and given to third parties in any
format or medium for personal research or study, educational or not-for-profit
purposes without prior permission or charge, provided:
• The authors, title and full bibliographic details are credited in any copy;
• A hyperlink and/or URL is included for the original metadata page; and
• The content is not changed in any way.
For more information, including our policy and submission procedure, please
contact the Repository Team at: [email protected]
http://eprints.hud.ac.uk/
xCoAx 2015: Proceedings of the Third Conference
on Computation, Communication, Aesthetics and X.
Edited by
Design
Alison Clifford
Ana Miguel Reis
Miguel Carvalhais
Miguel Carvalhais
Mario Verdicchio
Joana Morgado
Published by
Webdesign
Universidade do Porto
Ana Ferreira
Miguel Carvalhais
Organizing Committee
André Rangel
Photography
Alison Clifford
Pedro Tudela
Graeme Truslove
Stephen Blackwell
Jason Reizner
Mario Verdicchio
Volunteers
Miguel Carvalhais
Jacek Kilian
Pedro Tudela
Nikolaos Kampitsis
Poli Petrova
Local Organizing Committee
Richard Stevenson
Alison Clifford
Robert Tranter
Graeme Truslove
Stephen Blackwell
Steven Sherlock
Panel Moderators
Exhibition Setup
Alison Clifford
Luís Nunes
Jason Reizner
Luísa Ribas
Light Design
Mario Verdicchio
André Rangel
Pablo Garcia
Thor Magnusson
Light Console Operation
Alice Black
ISBN
Ross Cathcart
978-989-746-066-1
Technical Supervision
Graeme Truslove
Special Thanks
William Latham
Supporters and Sponsors
University of the West of Scotland
Faculty of Fine Arts, University of Porto
Università degli Studi di Bergamo
Anhalt University of Applied Sciences
Centre for Contemporary Arts
Creative Futures
Rectory of the University of Porto
ID+
i2ADS
Fundação para a Ciência e Tecnologia
DST
This project is partially funded by FEDER
through the Operational Competitiveness
Program – COMPETE – and by national funds
through the Foundation for Science and
Technology – FCT – in the scope of projects
PEst-C/EAT/UI4057/2011 (FCOMP-01-0124-FEDER-022700) and PEst-OE/EAT/UI0622/2014.
Scientific Committee and Reviewers
Alessandro Ludovico
Chandler McWilliams
Neural / Academy of Art Carrara
UCLA
Alex McLean
Christian Faubel
Interdisciplinary Center for
Academy of Media Arts Cologne
Scientific Research in Music,
University of Leeds
Cristina Sá
CITAR / School of the Arts,
Alice Eldridge
Portuguese Catholic University
University of Sussex
in Porto
Alison Clifford
Damian Stewart
University of the West of Scotland
Artist, Vienna
Álvaro Barbosa
Daniel Schorno
University of Saint Joseph, Macao
STEIM
André Rangel
Diemo Schwarz
CITAR / Portuguese Catholic
IRCAM
University
Francesca Pasquali
Andreas Muxel
University of Bergamo
Köln International School of Design,
University of Applied Sciences
Francisco Cardoso Lima
Cologne
Independent Artist, Aveiro
Andreas Zingerle
Graeme Truslove
University of Art and Design, Linz
University of the West of Scotland
Arne Eigenfeldt
Heitor Alvelos
Simon Fraser University
ID+ / Faculty of Fine Arts,
University of Porto
Bongkeum Jeong
m-iti — Madeira Interactive
João Cordeiro
Technologies Institute
CITAR, University of Saint Joseph,
Macao
Carlos Guedes
New York University Abu Dhabi
Jason Reizner
Miguel Carvalhais
Faculty of Computer Science and
ID+ / Faculty of Fine Arts,
Languages, Anhalt University of
University of Porto
Applied Sciences
Miguel Leal
Jon McCormack
i2ADS / Faculty of Fine Arts,
Monash University
University of Porto
Julio D’Escriván
Mitchell Whitelaw
University of Huddersfield
Faculty of Arts and Design,
University of Canberra
Ken Neil
The Glasgow School of Art
Monty Adkins
University of Huddersfield
LIA
Artist, Vienna / FH Joanneum, Graz
Nathan Wolek
Stetson University
Linda Kronman
Danube University Krems
Nicolas Makelberge
Symbio
Luís Gustavo Martins
CITAR / Portuguese Catholic
Pablo Garcia
University
School of the Art Institute of Chicago
Luísa Ribas
Paulo Ferreira Lopes
ID+ / Faculty of Fine Arts, University
Portuguese Catholic University,
of Lisbon
School of Arts
Manuela Naveau
Pedro Cardoso
Ars Electronica
ID+ / Faculty of Fine Arts,
University of Porto
Mario Verdicchio
University of Bergamo
Pedro Tudela
i2ADS / Faculty of Fine Arts,
Matthew Aylett
University of Porto
Royal Society Research Fellow,
School of Informatics, University of
Penousal Machado
Edinburgh
University of Coimbra
Philip Galanter
Texas A&M University
Roxanne Leitão
The Cultural Communication
and Computing Research Institute,
Sheffield Hallam University
Rui Torres
Faculty of Human and Social
Sciences, University Fernando
Pessoa, Porto
Simone Ashby
m-iti — Madeira Interactive
Technologies Institute
Teresa Dillon
Independent Curator, Artist,
Research & Educator
Tim Boykett
Time’s Up
Titus von der Malsburg
University of Potsdam
Thor Magnusson
University of Sussex / ixi audio
Valentina Nisi
University of Madeira
Yolanda Vazquez-Alvarez
University of Glasgow
Contents

13 Foreword

Papers

16 Alessandro Ludovico: Printed radicality
24 Gabriella Arrigoni & Tom Schofield: Understanding artistic prototypes between activism and research
40 Alessio Chierico: Interpretation, representation, material properties: three arguments about aesthetic qualities of computational media
52 Andreas Zingerle: 'Lets talk business' – Narratives used in email and phone scams
64 Emilia Sosnowska: Digital Sensing - the multisensory qualities of Japanese interactive art
79 André Rangel: Intermedia, an updated vision in the early twenty-first century
100 Luke Sturgeon & Shamik Ray: Visualising Electromagnetic Fields: An approach to visual data representation and the discussion of invisible phenomena
111 Mathias Müller, Thomas Gründer & Rainer Groh: Data Exploration with Physical Metaphors using Elastic Displays
125 Rodrigo Hernández: Modelling media, reality and thought: Epistemic consequences of the information revolution
139 Miguel Carvalhais & Pedro Cardoso: Beyond Vicarious Interactions: From theory of mind to theories of systems in ergodic artefacts
151 Sofia Romualdo: Videogames and the Art World
168 Damián Keller, Evandro M. Miletto & Nuno Otero: Creative Surrogates: Supporting Decision-Making in Ubiquitous Musical Activities
184 Peter Beyls, Gilberto Bernardes & Marcelo Caetano: The Emergence of Complex Behavior as an Organizational Paradigm for Concatenative Sound Synthesis
200 Gordan Kreković & Antonio Pošćić: Shaping microsound using physical gestures
213 Alex McLean: Live coding collaboration
221 Katharina Vones: Digital Symbiosis – The Aesthetics and Creation of Stimulus-Reactive Jewellery with Smart Materials and Microelectronics
233 Hanna K. Schraffenberger & Edwin van der Heide: Sonically Tangible Objects
249 Mark Hursty & Victoria Bradbury: Making a Magic Lantern Horror Vacui Data Projector
259 Christian Faubel: ZoOHPraxiscope, turning the overhead projector into a cinematographic device

Short Papers

269 Nicole Koltick: Accidental aesthetics: philosophies of the artificial
276 Helen Richardson: Training Performing Artists in the Digital Age
283 Christoph Theiler & Renate Pittroff: Fluid Control – Media Evolution in Water
289 Raul Pinto, Paul Atkinson, Joaquim Vieira & Miguel Carvalhais: Designing with biological generative systems: choice by emotion
296 Olivier Houix, Frédéric Bevilacqua, Nicolas Misdariis, Patrick Susini, Emmanuel Flety, Jules Françoise & Julien Groboz: Object with Multiple Sonic Affordances to Explore Gestural Interactions
304 Paul Keir: The Maximum Score in Super Don Quix-ote

Exhibition

311 Monty Adkins & Laurent Segretier: Skyler and Bliss
312 Josh Booth: Up Down Left Right
316 Paul Keir: The Maximum Score in Super Don Quix-ote
318 Mathias Müller, Thomas Gründer & Rainer Groh: Data Exploration with Physical Metaphors using Elastic Displays
320 Matt Roberts & Terri Witek: Unknown Meetings
322 Brad Tober: Colorigins: Algorithmically Transforming Subtractive Color Theory Pedagogy
327 Alena Mésarošová & Ricardo Climent: AR/VR_Putney 1.0 Interactive media composition as the language and grammar for Extended Realities
334 Andreas Zingerle & Linda Kronman: 'Lets talk business' – an installation to explore online scam narratives
339 Raul Pinto, Paul Atkinson, Joaquim Vieira & Miguel Carvalhais: Growing Objects: Testing with biological generative systems

Performances

344 Christoph Theiler & Renate Pittroff: Fluid Control
346 James Wyness & Graeme Truslove: Rikka
347 Jung In Jung, Dane Lukic & Stefanos Dimoulas: Thermospheric Station
352 Thor Magnusson & Pete Furniss: Fermata: Live Coding Performance
354 Ricardo Climent & Mark Pilkington: Putney for game-audio
359 Ephraim Wegner: Drei Mal Acht || 24

Algorave

363 Christian Faubel
367 Martin Zeilinger
369 Alex McLean
371 Shelly Knotts
373 Sam Aaron

Keynote

376 William Latham: Evolution Art and Computers Now

379 Biographies
Foreword
Welcome to the proceedings of xCoAx 2015, the third edition of
the international conference on “Computation, Communication,
Aesthetics and X”. Continuing in the tradition of its predecessors,
xCoAx 2015 provided a forum for artists, musicians, scientists and
researchers to share and explore synergies and intersections in
the fields of computation, communication and aesthetics.
Operating from the interstices between human creativity and
rule-bound computational systems, artists, performers and
researchers presented their own exciting new proposals and ways
of expressing the unknown or the ‘x’ that underpins each xCoAx
event.
This year’s edition was held in Glasgow - a city boasting a
long and rich heritage in the arts and sciences - with the Centre
for Contemporary Arts (the CCA) acting as the main venue.
The conference consisted of paper sessions, a keynote address
delivered by pioneering computer artist Professor William Latham
(Goldsmiths, University of London), performances and social
events, and an exhibition spread over two venues, not forgetting the
final send-off courtesy of the xCoAx algorave at the Glasgow School
of Art. The program included 87 participants from 14 countries.
We would like to thank them - authors, artists, musicians and
performers - and also the panel of reviewers from the scientific
committee who worked hard to ensure and maintain the high
quality of works accepted to xCoAx.
We would also like to acknowledge the support received from
the following institutions without whom there wouldn’t have been
a conference: the Creative Futures Institute, the University of the
West of Scotland (UWS) - the School of Media, Culture and Society,
and the School of Computing and Engineering, the University of
Porto, the Portuguese Catholic University, ID+, i2ADS, CITAR,
the University of Bergamo, the Centre for Contemporary Arts,
Glasgow and the Glasgow School of Art. In addition to this, the
support received from an excellent group of volunteers helped
make this year’s event a great success.
Papers
Printed Radicality
Alessandro Ludovico
Academy of Fine Arts, Carrara, Italy
[email protected]
Keywords: print, publishing, fake, library, digitalisation,
plagiarism, wikipedia
The static and unchangeable printed page is hardly considered a key tool for political and radical strategies in the 2010s, as human beings constantly look at a few personal screen-based devices, most of them updated in real time. But there are a few cultural elements in traditional media which still play a decisive role in the circulation of culture. Among them is the recognition of their aesthetic "forms," even when digitised in both design and content. The familiarity with those forms is based on metabolised "interfaces" (we're all culturally "natives" when it comes to radio, TV, and print) that make them almost invisible, especially when translated for the digital realm, delivering the content in a more direct way. And since we recognise those forms instinctively, we "trust" them, and so we trust their content.
1 Newspaper as (fake) political imaginary
Fig. 1 Il Male, 1978, fake of “La
Repubblica” front-page
1 “The Most Common Fake
Historic Newspaper” http://www.
historicpages.com/lincfake.htm
2 “New York Times special edition |
The Yes Men” http://theyesmen.org/
hijinks/newyorktimes
The form of the newspaper is still one of the most recognisable. What we can consider the modern form of the newspaper has changed only slightly since the 19th century (except for the inclusion of pictures and colours), becoming a daily medium for quite a few generations and establishing itself as an aesthetic standard and a defined cultural object with its specific interface. That is why artists and activists have often used newspapers as an identifiable information environment and a daily object at the same time: from Andy Warhol's "Headlines" series (Donovan 2011), with its huge reproductions of particularly dramatic front pages frozen in time, to the "Modern History" series by Sarah Charlesworth (1979), tracking the use of the same picture on different front pages. But a specific conceptual manipulation of newspapers (and of the conventional ecosystem surrounding them) has been employed by artists and activists to foster specific ideas. The "fake" newspaper, that is, accurately reproducing a real newspaper while arbitrarily changing its content, has always been able to question the instinctive trust we have in this medium. If making fake copies and freely distributing them in order to attract the public's attention (only for them to then reveal themselves as mere advertising flyers) is a remarkably old practice, dating back to the end of the 19th century,1 the conscious use of those fakes as a political medium is more recent. In this respect, a few effective examples emerged especially in the 1970s. "Il Male" (Sparagna 2000), for example, stemmed from the rise of leftist political movements in Italy, and especially the "Creative Autonomism" student movement of 1977. It conducted a few campaigns through fake journalistic "scoops" (all of them simultaneously plausible and surrealistic) rendered in the layouts of major Italian newspapers and posted next to newsstands, generating sometimes quite harsh reactions and a lot of discussion in the streets. In the same years two other actions (officially anonymous) were carried out. In 1979 in Poland, a fake of the major propaganda newspaper Trybuna Ludu was distributed during Pope John Paul II (Karol Wojtyla)'s visit to his homeland, sporting the banner headline "Government Resigns, Wojtyla Crowned King" (Sparagna 2000). And in France, in 1977, a fake Le Monde Diplomatique was anonymously distributed to a number of subscribers, featuring very satirical comments on the Rote Armee Fraktion's Stammheim Prison bloodbath (Alferj 1979). Thirty years later an impressive fake newspaper distributed in several thousand copies invaded the streets of New York City, on November 12, 2008: "The New York Times special edition" by The Yes Men in collaboration with Steve Lambert and The Anti-Advertising Agency, anonymously sponsored.2 It was set in the
3 “inVeritas - Paolo Cirio Contemporary Artist”
http://www.paolocirio.net/work/
inveritas/inveritas.php
4 “Liberals buy front-page newspaper
ad touting debate win - British
Columbia - CBC News" last modified May 01, 2013, http://www.cbc.ca/news/canada/british-columbia/story/2013/05/01/bc-liberals-newspaper-ad.html
Fig. 2 “inVeritas” Paolo Cirio, 2011
5 “‘Fake newspapers’ network
dismantled in Moldova - World - on
B92.net" last modified May 17, 2011,
http://www.b92.net/eng/news/world.php?yyyy=2011&mm=05&dd=17&nav_id=74384
near future (July 4, 2009), featuring only positive news, briefly plausible after Barack Obama's election as U.S. President. The New York Times layout, fonts and graphic design were painstakingly reproduced (including the usual advertisements, satirically changed as well), so the majority of the public was easily fooled. A large network of volunteers distributed it for free in the city, even in front of the New York Times headquarters, without any legal repercussions. What was embodied here was the public imaginary, the articulated hope this historical event generated, historicised altogether in a stable and recognisable format, without the daily compromises of major media. The group produced a few more fakes, one of them in the form of the International Herald Tribune. Italian artist Paolo Cirio, instead, made a project composed of a web application, a workshop and an action in 2011, called inVeritas. It is centred on Italian newspapers, inviting people to invent their own story, which can be composed as a headline sheet with the newspaper logo of their choice through the project's website. It is then fairly easy to print it out and attach it (during the night) close to local newsstands.3 The use of fake newspapers in political campaigns has proven not to be a thing of the past. The classic strategy of purchasing a full front-page ad designed to look just like the real front page has been used many times. The Liberal Party in British Columbia did it in 2013, disguising the ad as "official" information, and so generating a national media case with polarised reactions about the Party's ethics and the high risk of misleading readers.4 Even more directly, in 2011 there was a political newspaper scam in which police identified a network of infringers who had been illegally producing and distributing fake copies of Ziarul de Garda and Timpul, two of Moldova's leading newspapers, trying to manipulate public opinion ahead of elections by publishing negative articles about the pro-Western ruling coalition.5
2 Plagiarism
(from print to digital and vice-versa)
Newspaper fakes incorporate some forms of “plagiarism”, mostly
related to misusing a “standardised” visual form. This has been
technically feasible since the mechanical reproduction of print,
and even more with the lightning-fast speed and accuracy of digital (re)production. But the plagiarism of content is much older, and
the very concept of plagiarism dates back to the Roman Empire. It
was used for the first time by the Roman poet Martial, complaining
that another poet was “kidnapping” his verses, so he called him
“plagiarius”, which literally means “kidnapper.” These were the
verses he used to express his feelings:
Fama refert nostros te, Fidentine, libellos
non aliter populo quam recitare tuos.
si mea vis dici, gratis tibi carmina mittam:
si dici tua vis, hoc eme, ne mea sint.
(Fame has it that you, Fidentinus,
recite my books to the crowd as if none other than your own.
If you’re willing that they be called mine, I’ll send you the poems for free.
If you want them to be called yours, buy this one, so that they won’t be
mine.) (Lynch 2002)
6 “La carte ou le territoire | Espace
Virtuel" http://espacevirtuel.jeudepaume.org/la-carte-ou-le-territoire-1834/
There are plenty of more or less famous cases of literary plagiarism in history, but only some of them were publicly admitted (like the script of the TV series Roots, admittedly plagiarised by its author in some passages from the novel "The African," published nine years before). Today, plagiarism seems easier than ever, especially when taking advantage of sophisticated "big data" sources like Wikipedia, and a few critical artworks have consequently been developed. Belgian artist Stéphanie Vilayphiou investigates how free software can deeply question the fixity of the printed page once it is digitised, and how historically consolidated defensive copyright practices can be challenged. In particular she writes various pieces of transformative software to create controversial versions of literature classics. Specifically, in her net art piece "La carte ou le territoire (The map or the territory)"6 she selected a controversial book, Michel Houellebecq's "The map and the territory", which became renowned and discussed in France for its evident quotes from Wikipedia, never acknowledged by the author or the publisher. She retrieved the book's digitised text and then wrote a software filter which parses it into sentences (or parts of them), looking for them in the millions of digitised texts contained in Google Books and eventually finding the same
sequences of words in other books. The results are then rendered in their original typefaces, and the parts matching Houellebecq's book are highlighted in yellow. Visually the book is entirely transformed into a sequential digital collage of quotations (whose original authoritative printed context is still maintained in the background), definitively losing even the last bit of originality. Vilayphiou ultimately questions originality and authorship through software automatisms, turning them into trackable and technically demonstrable collective thinking.
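The filter Vilayphiou describes can be pictured with a minimal sketch, assuming the simplest possible pipeline: split the digitised text into sentences, look each one up in a large full-text corpus, and mark the matches for highlighting. The snippet below is purely illustrative and is not the artist's actual software; in particular, search_corpus is a hypothetical stand-in for whatever query against Google Books the piece really performs.

```python
import re

def split_sentences(text):
    # Naive splitter: break on ., ! or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def search_corpus(sentence):
    """Hypothetical stand-in for a full-text search against a large corpus
    (the piece itself matches sentences against Google Books)."""
    return []  # a real implementation would return the books containing the sentence

def annotate(text):
    """Pair each sentence with its sources; matched sentences are the ones
    that would be highlighted in yellow in the rendered collage."""
    return [(s, search_corpus(s)) for s in split_sentences(text)]

if __name__ == "__main__":
    sample = "A first sentence. A second sentence that might exist in another book."
    for sentence, sources in annotate(sample):
        print("MATCH" if sources else "  -  ", sentence)
```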
Another example of artistic practice deliberately using other people's writings in a specific context is Traumawien's "Ghostwriter"7 series. The Viennese group performed a virtual "action" with their own software
robots compiling and uploading hundreds of e-books on Amazon.
com with text directly stolen from YouTube videos’ comments, as
if they were abstract dialogues. They have defined it as an "auto-cannibalistic" model, and these e-books sport a very classic paperback layout as spontaneous instant books, redirecting the endless flow of comments into a specific form and freezing them in time. This action obviously re-contextualises the original meanings, setting them in a new scenario and in a new literary form: from personal notes not necessarily related to each other into a single continuous and sometimes surreal dialogue. What happens in the passage from one medium to another is that the original spontaneity and sometimes naïveté of the text, once rendered as
Fig. 3 Stéphanie Vilayphiou “La carte
ou le territoire” screenshot, 2012
7 “GHOSTWRITERS «
TRAUMAWIEN” http://traumawien.
at/prints/ghostwriters
a book, assumes the formal character of the adopted layout. The paradigm of access to "big data" is embedded in practices like those mentioned above, and the software programmer's vision is the only limit to the kind of results and new (digital and printed) forms that can consequently be created.
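By way of illustration only, the kind of pipeline a project like Ghostwriter implies can be sketched as follows: take a flat list of already-collected video comments and reflow them into a dialogue-like plain-text "book". This is a hypothetical reconstruction, not Traumawien's actual software, and it assumes the comments have been scraped beforehand.

```python
def comments_to_book(comments, title="Untitled", per_chapter=50):
    """Turn a flat list of comment strings into dialogue-like chapters."""
    chapters = []
    for i in range(0, len(comments), per_chapter):
        chunk = [c.strip() for c in comments[i:i + per_chapter] if c.strip()]
        # Alternate two anonymous speakers to give the text a dialogue feel.
        lines = [f"{'A' if n % 2 == 0 else 'B'}: {c}" for n, c in enumerate(chunk)]
        chapters.append("\n".join(lines))
    return title + "\n\n" + "\n\n* * *\n\n".join(chapters) + "\n"

if __name__ == "__main__":
    scraped = ["first comment", "a reply", "another voice", "and so on"]
    print(comments_to_book(scraped, title="Ghost Dialogue"))
```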
3 Printing as a risky strategy
At the end of the 2000s there were a few famous and dramatic cases of sensitive information leaks: Wikileaks and its small galaxy of information-wants-to-be-free "heroes" (Julian Assange, Bradley Manning, Edward Snowden) publishing secret or classified information from anonymous sources, and Aaron Swartz and his brave act of freeing the copyrighted academic knowledge of JSTOR (Nelson 2013) (Swartz committed suicide in 2013). The leaks' transmission and acquisition have been totally digital, but traditional media, including printed ones, mainly newspapers, have then been deeply involved in making this information "public" (and implicitly in somehow certifying the scale of the action with their innate "authority"). At a smaller scale, there are other cases using print as a tool for liberating secret information. Carl Malamud, for example, an activist dealing with the fact that vital parts of US law are secret and can be read only on payment of quite high fees, founded the Public.Resource.Org8 organisation, which digitises and eventually re-publishes public domain materials. He has scanned, OCRed and re-published in print codes like the "Public safety codes of California" or the "District of Columbia Official Code", including in the print a statement that says that, being law, any claim about their copyright by the authorities is "null and void." Answering the question "why print copies?", Malamud says that the print edition limits distribution, with no "side effect of infinite copy" to scare standard and legal people, making his efforts somehow still acceptable. In this case print is turned into a legally strategic medium of distribution because of its slow duplication, just as newspapers have been strategic for Wikileaks: part of a clever tactic that considers the different role and weight of each medium in order to accomplish an effective distribution of the content.
8 “Public.Resource.org”
https://public.resource.org/
9 “Google and the world brain
- Polar Star Films - The most
ambitious project ever conceived
on the Internet” http://www.
worldbrainthefilm.com
4 The library, ultimate cultural centre
vs. big data repository
“(Libraries) are nerve centres of intellectual energy [...] knowledge is
power [...] and that power should be disseminated and not centralised.”
Robert Darnton, Harvard University library director 9
10 “Take Your Seashells Out of Your
Ears! » No Media Kings” http://
nomediakings.org/games/take-your-seashells-out-of-your-ears.html
11 “Welcome to BookCrossing”
http://www.bookcrossing.com/
12 “Welcome - Little Free Library”
http://www.littlefreelibrary.org
13 “Ourshelves” http://ourshelves.org
14 “Citizen Libraries Are The New
Home For The Printed Word |
Co.Exist | ideas + impact”
http://www.fastcoexist.com/1681736/citizen-libraries-are-the-new-home-for-the-printed-word?utm_medium=referral&utm_source=pulsenews
15 “Template:Library resources box Wikipedia, the free encyclopedia”
http://en.wikipedia.org/wiki/
Template:Library_resources_box
The physical library is one of the crucial spaces where the discourse about the new relationships between traditional and digital publishing is taking place. On one side, the "global virtual library" is closer than ever, with Google investing millions of dollars to digitise millions of books, and with plenty of other similar efforts at different scales, including some remarkably vast, independent and shared ones. On the other hand, the physical library's historical values as a meeting and research space for citizens are simultaneously reclaimed and challenged. Funding cuts and innuendos about its obsolescence in the digital era are dramatically permeating both common sense and institutional policies. Some libraries are reinforcing their role through digital initiatives, like the Toronto Public Library, which launched a Fahrenheit 451-themed alternate reality game in which people were invited to play in the city through telephone calls, with the motto "Join the literary resistance."10
And the push on libraries to "reinvent themselves" can effectively be rethought by taking the exchange of physical books as a starting point to expand knowledge in new directions, creating less conventional models for it. So beyond platforms like BookCrossing,11 which uses a web-based platform and a simple social mechanism to share books in public places, the main question seems to be about which social role the exchange of knowledge can implement. For example, there are different efforts in building what could be defined as a "spontaneous citizen library". There are attempts on a small scale like the Little Free Libraries,12 a few thousand wooden boxes scattered around the world where people can take or leave books, or Ourshelves,13 a San Francisco lending library open to everyone, with almost 300 members and 3,000 volumes, built around its community and planning to replicate around the city.14 And if we take into account that Wikipedia has specific templates for adding information to its pages about the availability of related content in local libraries,15 spontaneous social mechanisms connected to a self-managed physical exchange can easily be enabled. These kinds of initiatives can question the library as a centralised facility, reconfiguring it as the outcome of a community and opening new possibilities. Teaching how to digitise books, for example, could dramatically expand access, especially to forgotten titles which Google Books won't include or give access to for different reasons. The people involved should then assume their own responsibility in scanning and sharing, on a personal and independent level, building their own cultural history, preserving (physically) and sharing (digitally) all the knowledge that they think is worthwhile, as has been done with music since the early 2000s.
Conclusions
The historical importance of the printed page as a medium still has a great influence on cultural dynamics, and it can be used to trigger innovative and radical processes when approached with the new opportunities offered by digital technologies. Active and critical strategies can then be developed using the combined qualities of these two media. The most effective radical efforts have historically been supported by an innovative use of media and technologies, which has grounded the vision of new social and cultural models. The re-appropriation of public imaginary through printed fakes, the plagiarised use of online content in print, the ability to create social libraries, and the sharing of digitised content can structurally redefine the printed medium, turning it into a crucial opportunity to rethink our relationship with knowledge, in both contemporary and historical perspective.
References:
Alferj, Pasquale, and Giacomo Mazzone. I Fiori di Gutenberg. Rome: Arcana, 1979.
Charlesworth, Sarah. Modern History. Edinburgh: New 57 Gallery, 1979.
Donovan, Molly, Andy Warhol, and John J. Curley. Warhol: Headlines. New York: Prestel, 2011.
Lynch, Jack. "The Perfectly Acceptable Practice of Literary Theft: Plagiarism, Copyright, and the Eighteenth Century." Colonial Williamsburg: The Journal of the Colonial Williamsburg Foundation 24, no. 4 (Winter 2002–3): 51–54. Williamsburg, VA: The Colonial Williamsburg Foundation, 2002.
Nelson, Valerie J. "Aaron Swartz dies at 26; Internet folk hero founded Reddit." Los Angeles Times, 12 January 2013.
Sparagna, Vincenzo. Falsi da ridere. Dal Male a Frigidaire, dalla Pravda a Stella Rossa, dal Corriere all'Unità, da Repubblica al Lunedì della Repubblica. Rome: Malatempora, 2000.
Understanding Artistic
Prototypes Between
Activism and Research
Gabriella Arrigoni
Fine Art and Culture Lab (Newcastle University),
Newcastle upon Tyne, UK
[email protected]
Tom Schofield
Digital Interaction at Culture Lab (Newcastle University),
Newcastle upon Tyne, UK
[email protected]
Keywords: prototype, lab, practice-based research, activism.
The paper explores the concept of artistic prototypes to analyse a strand of new media art generated within research or activist contexts. Two key features of a framework for artistic prototypes, openness and fictionality, are explored through the discussion of two artworks which embody a sense of prototypicality. The contingent, situated interpretation of knowledge emerging from creative practice-based research is associated with the instability of prototypes, proposed as a paradigmatic object for experimentation.
1 Introduction
Historical and interpretative approaches to new media art have focused on its immaterial, networked nature, and its problematic fit with traditional museum settings and preservation standards (Dietz 1999; Banovic et al. 2002; Krysa 2006; Paul 2008; Graham & Cook 2010; Graham 2014). Current tendencies in technological development however suggest more hybrid and integrated forms of materiality and immateriality, with digital devices embedded in physical objects or disseminated across the environment, rather than confined to screen-based interfaces (Weiser 1991; Gershenfeld et al. 2004). Such a vision of ubiquitous computing embraces (or questions) ambient technologies that disappear into the background but seamlessly pervade the environment, becoming more human or calmer (Weiser & Brown 1997; Greenfield 2010; Dourish & Bell 2011). New paradigms of understanding account for this shift from a cultural perspective: the notion of the post-digital refers to a complete blending of analogue and digital, a dimension where digital technologies are no longer a revolution (Negroponte 1998), but a fully assimilated factor by now treated as a given (Cramer 2014).
A particular strand of new media art is engaging with technology to question patterns of innovation or tell stories about possible futures. These practices are often generated within research environments, in the context of practice-based research (PbR) or research through design, or from the activist and collaborative approaches typical of media-labs, hack-labs and makerspaces. These works use coding to make physical objects perform in determined ways, and adopt processes close to interaction design. It is actually possible to describe an overlap across art and design, with tendencies like critical and speculative design (CSD) (Dunne 2008) appropriating artistic languages and channels of dissemination, and artists, on the other hand, adopting design methods to make artworks. Despite Dunne and Raby's assertion that Critical Design is not art (Dunne & Raby 2007), strong parallels exist, with both attempting to inject an element of the critical into the everyday. The main feature that this strand of new media art is borrowing from design is the practice of prototyping, or the tendency to present the artwork as a prototype: an invented, innovative device introduced to the public more as a proposal for further development, to be used or manipulated, than as a unique, stable piece to be contemplated.
Prototypes are commonplace in research because of the way they afford an analysis of the making process and suggest new fields of exploration. Within activist approaches, they are created to demonstrate how social/political/economic change is possible
and support innovative practices inspired by values such as freedom (of speech, information), equality, sharing and communitarianism, anti-consumerism and environmentalism. This paper
compares literature on prototypes with two artworks to explain
why it is possible to define them as prototypical, and disclose their
relationship with research and activism. Subsequently, it suggests
a framework to understand the behavior of artistic prototypes,
with the aim of supporting further work by curators, museum practitioners and theorists in conceptualising and mediating these
works to the public.
2 The Prototype: What Is It?
The most common definition of the term prototype relates to the design process: it tangibly manifests an idea in order to test possibilities and share it with stakeholders (managers, collaborators, prospective users). In fact it is an idea. Prototyping has been described as a way of thinking and learning by doing, integrating reflection and evaluation with the practice of making and performing in the real world (Hartmann et al. 2006, p.299). In manufacturing, prototypes are used to assess technical feasibility, aesthetic issues, usability or experience (Visser 2014, p.5); they can address different audiences depending on purpose: to externalise and develop an idea, to promote the project within the organisation, to evaluate user experience, or to evaluate its potential success on the market. Consequently a prototype can be of high or low fidelity and made in different media
(Rudd et al. 1996).
Prototyping has been described as crucial to innovation
(Schrage 1993; Kelley 2001), especially because of its persuasive
power. The material manifestation of an idea can be more convincing than verbal or written accounts, and articulate the complexity of the context in which the new product might become
desirable, or the problems it might create. Indeed prototypes are
common ways to materialise visions for the future, catalyse creativity (Carleton & Cockayne 2009) and quickly generate new avenues of development (Briscoe & Mulligan 2014; Rosell et al. 2014).
Finally, prototypes have a special role within activist paradigms
of open and grassroots innovation, when making becomes a pathway for collaboration and to negotiate the value of emerging technologies in bottom-up dimensions of co-creations (Chesbrough
2003; Kera 2001; Kera 2013).
Prototypes are strongly suitable to support cooperation and
collaboration within an organisation (Schrage 1993) or through
networked communities. They can elicit discussion, facilitate
the comparison of different perspectives, and contribute to the
articulation and sharing of knowledge around a project. For this
mediating role they have been interpreted as boundary objects
(Rhinow et al. 2012; Subrahmanian et al. 2003). Additionally, they
are key in participatory design methods to encourage contributions from participants (Bødker & Grønbæk 1991; Greenbaum &
Kyng 1991; Vines et al. 2013). These characteristics of prototypes
make them ideal instigators in activist processes of change, both
proposing viable alternatives to the status quo, and enabling the
diffusion of such alternatives through co-creation.
Prototypes can also function as critical artefacts: rather than
early versions of products, they are provocative objects able to
open up new directions or fields of exploration for design; instigate debate; and support an investigation of people's values and attitudes (Dunne & Gaver 1997; Bowen 2007; Gaver et al. 2008). Therefore they can be used for advocacy and to subvert a passive acceptance of the status quo. CSD artefacts belong to this category, and are
sometimes presented alongside narrative elements and scenarios
to depict their imaginary settings.
This brief overview highlighted a variety of ways of describing
prototypes and their range of functions: representations, manifestations or mediations of ideas; tools for sharing, collaborating,
communicating, testing; embodiments of arguments and visions;
props for action or discussion. Such an assemblage of heterogeneous meanings can be useful when compared with existing artworks that present some of the above characteristics. The next
section analyses two examples in this light, as a prelude to a more
general understanding of artistic prototypes.
3 Artistic Prototypes
3.1 Sentient City Survival Kit
The Sentient City Survival Kit (SCSK) by artist and architect Mark
Shepard consists of a series of devices conceived to bypass various
forms of surveillance in near-future cities dominated by ubiquitous computing. The aim is to question the paradigm of a responsive urban environment disseminated with information systems
and to raise awareness of the possible consequences for social and cultural life, privacy and trust (Shepard 2010).
The kit contains four artefacts. The Serendipitor is an alternative navigation system opposing the logic of efficiency guaranteed by common navigators, to reintroduce detours, unexpected encounters and serendipity. Under(a)ware is a line of underwear able to sense Radio Frequency Identification (RFID) tag readers and alert the wearer to their presence with a small vibration. The Ad
Hoc Network Travel Mug creates free networks of communication
hidden to any monitoring system. The CCD-me-not Umbrella’s
infrared LEDs let the user play and bewilder surveillance cameras.
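To make the logic of the Serendipitor concrete, a detour-seeking route can be sketched by perturbing the straight line between two points with random intermediate waypoints. The code below works on plain grid coordinates and is only an assumption about how such behaviour could be implemented; it makes no claim about Shepard's actual app.

```python
import random

def serendipitous_route(start, end, detours=3, max_offset=4.0, seed=None):
    """Return (x, y) waypoints from start to end with random detours.

    Instead of the shortest path, a few waypoints along the straight line
    are pushed off-course, so the walk invites detours and chance encounters.
    """
    rng = random.Random(seed)
    route = [start]
    for i in range(1, detours + 1):
        t = i / (detours + 1)                     # fraction of the way along the line
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        route.append((round(x + rng.uniform(-max_offset, max_offset), 1),
                      round(y + rng.uniform(-max_offset, max_offset), 1)))
    route.append(end)
    return route

if __name__ == "__main__":
    print(serendipitous_route((0, 0), (10, 10), seed=42))
```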
In a paper presented at the Digital Arts and Culture Conference, Shepard explicitly refers to these artefacts as prototypes
and describes them as the main vehicle to disseminate the project
in museums, art festivals and public lectures (Shepard 2009, p.5).
Additionally, a dedicated website offers DIY tutorials to engage
the public in building the kit: these include source code, circuit
diagrams and parts list, released under a Creative Commons
License. The intention is not just an alignment with an open
source attitude, but also that this artistic project be replicated,
multiplied and used.
Shepard mentions critical design as a method, and shares with it
the goal of generating “discussion around just what kind of future
we might want” (ibid. 2009, p.2). The kit however is presented as
an artwork, rather than a design project uncovering forthcoming technological trends. As opposed to positions “casting art in
a reactionary role vis-à-vis technological development”, Shepard wants to explore new roles for the artist “in shaping how we
inhabit the near-future Sentient City” (ibid. 2009, p.5). The prototypes are framed as archaeological traces of the future, demanding interpretation and questions about proximal socio-cultural
developments “to instigate the process of imagining a future city
and its inhabitants through fragments and traces of a society yet
to exist” (ibid. 2009, p.5).
Grounded in current orientations of R&D labs in urban computing and ambient informatics, the SCSK is rooted in a research framework and, through its prototypes, generates new knowledge (forms of conceptualisation and the problematisation of an issue) that finds dissemination through typical academic channels such as lectures and conferences, alongside artistic channels (exhibitions and festivals). Like classic design prototypes, it proposes a set of innovative devices that might be associated with new social practices, but it simultaneously elicits discussion and critical exploration. Finally, thanks to its open source logic, the project presupposes collaboration and, potentially, the multiplication, implementation or modification of the prototypes. The activist perspective is rather implicit, but it can be identified with the intent of anticipating change and inspiring actions of resistance towards imposed technological paradigms (control, surveillance). The next example instead shows a stronger activist take, and a less evident link to research.
3.2 Re:Farm the City
Re:Farm the city is an ongoing project initiated by a collective
formed across Barcelona's Hangar Media Lab and Madrid's Medialab Prado, led by Hernani Dias. It consists of a set of
open source tools (hardware and software) to develop sustainable,
small scale urban agriculture. These include farm containers
(mobile planters of various dimensions, for indoor or outdoor use), watering systems connected with monitoring systems, compost mixers, a web interface for managing the farm at a distance, and bike-powered water pumps or generators. The initiative has now
reached various cities across the world where new participants
have embraced the project, adopting, customising or adding new
tools. Part of the tutorials is available to everyone on the blog and
wiki. Dias however tends to privilege the formation of small communities built through workshops he runs when invited by artistic institutions, so that a more direct exchange can take place,
and the expansion of the project can be more easily documented
(Dias 2013).
The tools are prototypes combining sensors, electronics and
recycled material aimed at generating new everyday practices and
impacting the real world. Their functioning is not always guaranteed; rather they are unstable and open to implementation and customisation according to specific local conditions, including climate, cuisine and biodiversity. This makes Re:Farm an exportable, adaptable model to support local production, conceived to turn much of the city's own recycled trash into a resource (ibid.).
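The coupling of monitoring and watering systems mentioned above can be reduced to a minimal control loop: read a soil-moisture value, switch the pump when the value drops below a threshold, and log the reading so a remote web interface could display it. The sketch below is an assumption about how such a planter might behave, with read_soil_moisture and set_pump standing in for whatever sensor and pump hardware a given Re:Farm build actually uses.

```python
import random
import time

DRY_THRESHOLD = 30.0  # percent soil moisture below which we water

def read_soil_moisture():
    # Stand-in for a real sensor reading (e.g. an ADC value mapped to 0-100 %).
    return random.uniform(0.0, 100.0)

def set_pump(on):
    # Stand-in for driving a relay or transistor connected to the water pump.
    print("pump", "ON" if on else "OFF")

def control_step(log):
    moisture = read_soil_moisture()
    set_pump(moisture < DRY_THRESHOLD)
    # Keep a record that a remote web interface could expose.
    log.append({"t": time.time(), "moisture": round(moisture, 1)})

if __name__ == "__main__":
    readings = []
    for _ in range(5):          # a real planter would loop indefinitely
        control_step(readings)
        time.sleep(0.1)
    print(readings)
```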
Similarly to SCSK, this work comprises a set of digital devices
embodying a proposal for new practices; expects to be shared,
appropriated, used and transformed by other contributors; adopts
technology to suggest alternative views and critique established
ones. Artistic approaches and channels of dissemination are
integrated within a process of grassroots innovation supported
by typically activist values such as respect for the environment,
resistance to consumerist cultures, communitarianism and
localism. The project has also been presented through talks and
conferences (Calvillo et al. 2010), and its development required
combining existing knowledge (from math to biology and the
mechanics of luids) into something new and transferable. This
transferability is what differentiates it from more traditional artworks usually expressing the unique talent of the artist. The artefacts created through Re:farm are conceived to be as easily replicated as possible.
4 Between Activism and Research
The existence of a strong tie between new media art and research
has been recognised in recent literature that emphasises the
experimental approach of artist-technologists (Gere 2010) and
the way they share similar channels of dissemination and reward
with academics, diverging instead from the logics of the gallery
and the art market (Scrivener & Clements 2010). The prototypical nature of a significant number of artistic works generated
as research however has been neglected in discourses of media
art and only mentioned by proponents of PbR in the arts as one
of its key physical outcomes. In this context, prototyping has a
crucial role because it fits with a cycle of trial, analysis, implementation and evaluation, usually adopted by researchers to combine theory and practice (Winter & Brabazon 2010, p.5; Edmonds
& Candy 2010). Here, building artefacts is regarded as the main
site of knowledge production, while the identity between maker
and researcher is recognised as PbR's defining quality (Coessens
et al. 2009; Borgdorff 2011). Accordingly, prototypes are particularly suitable to scenarios where we wish to manifest, visualise
and analyse the making process. Because artefact and research
development are constantly affecting each other, prototypes are
a natural outcome of artistic research. Furthermore, by virtue of
their openness to transformation and their unfinished dimension,
they are well placed to encourage feedback and elicit responses
from users/audiences when the research goals concern aspects
of public experience (Muller 2008; Chatting 2014). Finally, prototypes allow hypotheses to be explored and tested in tangible ways,
opening up new fields of research or creative possibilities.
The nature of knowledge in artistic research has been at the
centre of passionate debates and, in the attempt to establish its
position alongside traditional academic standards, redefined as
situated, contingent, embodied, experiential and tacit or non-conceptual (Sutherland & Acord 2006; Knowles & Cole 2007; Barrett
& Bolt 2010; Borgdorff 2011). These approaches to ‘knowing’ seen
as an action rather than a static entity all share an awareness of
the intrinsic dynamism of material and social situations in which
artefacts come to exist. We suggest that such an emerging conceptualisation of knowledge comes together with the provisionality and instability typical of prototypes.
The second context where artistic prototypes are thriving
can be identified with media-labs and makerspaces. These environments embrace the processual and collaborative dimension
of new media art (Graham & Cook 2010, chap.4) by supporting
hybrid platforms for activities such as workshops, presentations
of work in progress, festivals, conferences or hackathons. Even
though some media-labs, such as MediaLab Prado or Ars Electronica Futurelab, have become key references in the media art
scene, most of them develop identities less focused on art in a
narrow sense, and more on production and intervention. Social
empowerment, environmental issues and participation are generally high on makers' agendas (Yair 2010), and prototyping emerges as an ideal practice to support these goals. Labs recognise access to and engagement with emerging technologies as an essential step
to enable citizens in understanding and negotiating otherwise
top-down innovation paths (Kera 2013). Specific programmes are
devised for the inclusion of marginalised groups such as NEETs,
homeless people or women (Frost 2012). Even if deployed in small
scale projects, they can demonstrate the potential of alternative
approaches and contribute to change accustomed mind-sets or
challenge traditional production and distribution systems (Yair
2010, p.3). Prototypes are usually the vehicle of such endeavors,
because of their capacity to materialise vision and demonstrate
inventive and sustainable possibilities that can be easily built and
tested in small communities. Indeed, prototyping resources such
as 3D printers, microcontrollers and laser cutters are among the
most common items populating fablabs (the area of media-labs
devoted to fabrication).
Workshops and hackathons are the most typical event-formats in labs. They engender opportunities to collaboratively and
informally work around creative ideas, and generate prototypes
thanks to their intense and concentrated structure (Seravalli
2013; Briscoe & Mulligan 2014). The open source ethos and the
preference towards recycling that commonly inform media-labs
(Frost 2012) are further relevant factors leading to the production
of prototypes. Both attitudes imply that objects have an expanded
lifecycle and are constantly subject to transformation by a distributed network of users/makers. Prototyping is seen as an agent
of change, and connected to an activist mindset that opposes
consumerism, encourages exchange, cooperation and sustainable,
scalable solutions.
5 A Theoretical Framework
The examples reported demonstrate a range of specific characteristics of artistic prototypes that serve either research or activist purposes. In previous unpublished work we have described a conceptual framework for understanding the behavior of artistic prototypes. 'Openness' and 'fictionality' are identified as their key features and related to research and activism as their main areas of application (Fig.1). The framework also articulates how 'openness' and 'fictionality' support further facets of prototypicality,
namely generativeness, participation, critique and testing. The
scope of this paper is however only focusing on a speciic part of
the framework; in the next paragraphs we will demonstrate that
‘openness’ and ‘ictionality’ are strongly compatible with activist and research functions of prototypical artworks respectively.
Beginning by identifying some ways that the examples described
embody these facets we will continue by suggesting how a conscious adoption of these aspects of our framework can support
artistic prototypes as both kinds of research and modes of activism in the future.
Fig. 1 The artistic prototypes
framework.
5.1 Openness
Both SCSK and Re:Farm imply a potential towards their own
expansion, appropriation and modification. This can be understood as a form of openness of the prototype. Prototypes are open because they are unstable, provisional, not definitive, unfixed, prone to transformation and re-definition, situated in a dynamic life-cycle, in between made and un-made. Both this instability and the reliance on external influences to determine their performance relate the openness of prototypes to activist modalities.
Openness can be found at different levels. Technological iteration
‘opens’ the prototype to new functionality and consequently new
applications. Openness to interpretation not only relates to polysemy and subjectivity (as in Eco’s theorisation of The Open Work
1989), but also provokes consideration of the ways prototypes
connect to practices, values and cultural systems. These associations between objects and contexts are not established permanently, but evolve through time, so that the same device becomes
potentially integrated into very different practices. Finally, multiple and differentiated versions of a prototype can be made, on the
basis of shared instructions. This is also associated with a participatory dimension, where interventions come from a broad
community of local or networked collaborators. Phenomena such
as Open Innovation (Chesbrough 2003) and Open Design (van
Abel et al. 2014) are based on a similar principle.
It is notable that the open aspect of protypical artworks is foregrounded in many activist artworks. This is achieved principally
through the production of workshops and through the release of
code resources or kits of parts, as described earlier (or in examples
such as Loenen 2013; Dentaku 2014). By stressing participation
both during an art event (such as an exhibition) and afterwards –
as others take forward and develop the work further, artists use
the openness of prototypes as catalysts to support a particular
vision of participation and a politics of self empowerment: learn
to code and gain agency in the (techno-political) world. The activism embodied hitherto in artistic prototypes has the flavor of an
alternative techno-utopianism e.g. as described in (Oliver et al.
2011). Code however is not the only site and means of participating
in the transformative process around a prototype, as modification
and personalisation can also concern other levels of intervention
(such as the aesthetic and formal level, or the use and context of
adoption). The simple replication and adoption of a prototype as it
is released in the public realm is also a way of generating change,
by disseminating a new kind of practice or behavior.
We are sympathetic with the political will expressed in such
work but note that to achieve its goals requires a very significant proportion of a community to engage, develop and own it.
There are some distinct technical devices through which activist-friendly kinds of openness can, we feel, be encouraged. Open
source code repositories such as those hosted on Github (Dabbish
et al. 2012) provide an appropriate analogy for the success of open,
activist, artistic prototypes. We define success here as the degree
to which the prototype has become an active agent for change,
adopted and adapted by many and put to diverse uses. In open
source code repositories contribution comes in two main forms;
an addition to the main development strand or a ‘fork’ which
effectively splits the development of the code into two diverging
directions (which in turn can be subdivided further in the future).
There is nothing inherently better or worse about forking or contributing, but the latter strengthens and develops the code in tune with a core ethos sometimes explicitly agreed among developers, while the former diversifies and pluralises what the project is or
can be. Returning to artistic prototypes, we point out that often,
little strategy exists for tracking, consolidating and mutually supporting the future iterations of work, all of which might better support its activist aims. Outside of the world of software development we point to a need to manage, identify and coordinate further
development of artistic prototypes. Not only will this strengthen
and pluralise their development but will also contribute to their
relationship with knowledge and research by allowing for comparisons and cross-referencing of projects in different contexts.
5.2 Fictionality
The fictionality of artistic prototypes also assumes a variety of
forms. In our examples SCSK explicitly suggests a near future
scenario, developed on the basis of current socio-technological
tendencies. By contrast Re:farm suggests a more subtle (and less
futuristic) narrative, letting people imagine how the urban environment might be should the project become commonplace. We propose a very broad definition of 'fiction', which includes hypothetical cultural systems and associated values, practices, scenarios, behaviors, and any non-actual but plausible element that can be associated with the way the prototype is used or interpreted.
The fictional layer can be directly provided by the artist
through supporting information, documentation and materials;
or manifested through ambiguous objects that demand the viewer
to imagine possible scenarios to which they might belong. Such
strategies support PbR by providing avenues for understanding
audiences’ responses to artworks, for pluralizing their message
or indeed for helping the artist to develop them in new directions.
Fictionality is compatible with research also because of its critical and speculative facets. Artefacts presented as embedded in
imagined but plausible situations materialise an alternative world
that makes them a prompt for critique, reflection and debate
(Dunne & Raby 2010). In critical design the subjects of critique
are often innovation, consumer culture, assumptions and ideologies embedded in products. Prototypes support research aims by
demonstrating the feasibility or desirability of innovative technologies (Kirby 2009), testing and evaluating their implications
on society (Bleecker 2009), or assessing the responses they might
elicit in the public (Beaver et al. 2009). Similarly in artistic prototypes iction becomes an environment where hypotheses can
be developed, explored and made tangible. In artistic prototypes
iction and reality are never mutually exclusive, but maintain a
strong tie, as the engagement of the viewer is rooted in their complementary relationship. Prototypes’ ictionality is grounded in
the artefacts’ material presence and in their scientiic or technological background. This is directed at generating in the public a
sense that such artefacts can be related and integrated in their
everyday lives. Thus, research is enacted through iction because
of an explicit commitment to testing, hypothesis and experimentation on human attitudes and behaviors.
6 Discussion and conclusion
Artistic prototypes are interpreted as a key object emerging from
research approaches based on practice and are associated with contingent and transitional definitions of knowledge. Their role in
research relates to the way they enable us to investigate the making process, provoke responses in and feedback from the public,
and provide a tangible environment in which to trial hypotheses. These
potentials are particularly supported by the fictional character of prototypes, especially when participants are involved in the
study. Openness, by contrast, is more strongly related to an activist
dimension. Its participatory and generative potentialities in fact
directly link to grassroots initiatives and to the search for sustainable and ethical alternatives to established patterns of manufacture and distribution.
This distinction is tentative and provisional but is intended to support and enrich the vocabulary for further discussion. The concept
of the artistic prototype is itself inherently porous, since there
are no conclusive and unequivocal criteria to distinguish it from
non-artistic prototypes. Rather we point to a ‘family resemblance’
(Wittgenstein 1953) between such works. The aims and contexts in
which a project is developed can contribute to the definition of
an artistic prototype. Ultimately, it is distinctive of artistic prototypes to be valued regardless of their subsequent developments,
whereas other prototypes’ value relates to the expectation of a
closure, a resolved version, even if that resolution is subsequently
undone. Nevertheless, we believe that this framework could be a
valuable starting point to identify the emerging concept of the
artistic prototype and to initiate a debate around its behavior and
positioning in contexts where new media art is developing and
finding applications beyond traditional artistic environments.
References
Banovic, T. et al. eds., 2002. Curating New Media - Third Baltic
International Seminar, Gateshead: Baltic.
Barrett, E. & Bolt, B. eds., 2010. Practice as research: approaches to
creative arts enquiry, I.B. Tauris.
Bødker, S. & Grønbæk, K., 1991. Cooperative prototyping: users and
designers in mutual activity. International Journal of Man-Machine
Studies, 34(3), pp.453–478.
Borgdorff, H., 2011. The production of knowledge in artistic research. In
M. Biggs & H. Karlsson, eds. The Routledge Companion to Research in the
Arts. London and New York: Routledge, pp. 44–63.
Bowen, S., 2007. Crazy ideas or creative probes?: presenting critical
artefacts to stakeholders to develop innovative product ideas. In
Proceedings of EAD07: Dancing with Disorder: Design, Discourse and
Disaster. Izmir.
Briscoe, G. & Mulligan, C., 2014. Digital Innovation: The Hackathon
Phenomenon.
Calvillo, N. et al., 2010. PROTOTYPING PROTOTYPING. In ARC
Anthropological Research on the Contemporary. Available at: http://
research.gold.ac.uk/4664/1/ARCEpisode3-Prototyping.pdf.
Carleton, T. & Cockayne, W., 2009. The power of prototypes in foresight
engineering. In DS 58-6: Proceedings of ICED 09, the 17th International
Conference on Engineering Design, Vol. 6, Design Methods and Tools (pt. 2).
Palo Alto.
Chatting, D., 2014. Speculation by Improvisation. In DIS 2014 Workshop
on Human-Computer Improvisation.
Chesbrough, H.W., 2003. Open innovation: The new imperative for creating
and profiting from technology, Cambridge (Massachusetts): Harvard
Business Press.
Coessens, K., Crispin, D. & Douglas, A., 2009. The artistic turn: a
manifesto, Leuven: Leuven University Press.
Cramer, F., 2014. What is “Post-digital”? APRA, A Peer Reviewed Journal
About Post-Digital Research, 3(1). Available at: http://www.aprja.
net/?p=1318.
Dabbish, L. et al., 2012. Social coding in GitHub: transparency and
collaboration in an open software repository. In Proceedings of the ACM
2012 conference on Computer Supported Cooperative Work. ACM.
Dentaku, 2014. Meet Ototo. Ototo. Available at: http://www.ototo.fm/
products.
Dias, H., 2013. About refarmthecity.org. Re:farm the city. Available
at: http://refarmthecity.org/wiki/index.php/Main_Page [Accessed
December 5, 2014].
Dietz, S., 1999. Why Have There Been No Great Net Artists? Through the
Looking Glass: Critical texts. Available at: http://www.voyd.com/ttlg/
textual/dietzessay.htm.
Dourish, P. & Bell, G., 2011. Divining a digital future: Mess and mythology
in ubiquitous computing, Cambridge (Massachusetts): MIT Press.
Dunne, A., 2008. Hertzian tales: Electronic products, aesthetic experience,
and critical design, Cambridge (Massachusetts): The MIT Press.
Dunne, A. & Gaver, W., 1997. The Pillow: Artist-Designers in the Digital
Age. In CHI Extended Abstracts.
Dunne, A. & Raby, F., 2007. Critical Design FAQ. Dunne and Raby.
Available at: http://www.dunneandraby.co.uk/content/bydandr/13/0
[Accessed January 6, 2015].
Edmonds, E. & Candy, L., 2010. Relating Theory, Practice and Evaluation
in Practitioner Research. Leonardo, 43(5), pp.470–476. Available at:
http://www.mitpressjournals.org/doi/abs/10.1162/LEON_a_00040.
Frost, C., 2012. Media Lab Culture in the UK. Furtherfield. Available at:
http://www.furtherfield.org/features/articles/media-lab-culture-uk
[Accessed September 20, 2014].
Gaver, W. et al., 2008. Threshold Devices: Looking Out From The Home.
In CHI 2008. Florence.
Gere, C., 2010. Research as Art. In H. Gardiner & C. Gere, eds. Art Practice
in a Digital Culture. Farnham (Surrey): Ashgate, pp. 1–7.
Gershenfeld, N., Krikorian, R. & Cohen, D., 2004. The Internet of
things. Scientific American, 291(4).
Graham, B. ed., 2014. New Collecting: Exhibiting and Audiences after New
Media Art, Farnham (Surrey): Ashgate.
Graham, B. & Cook, S., 2010. Rethinking Curating: Art after New Media,
Cambridge (Massachusetts): The MIT Press.
Greenbaum, J.M. & Kyng, M., 1991. Design at work: Cooperative design of
computer systems, Hillsdale, NJ: L. Erlbaum Associates Inc.
Greenfield, A., 2010. Everyware: The dawning age of ubiquitous computing,
San Francisco: New Riders.
Hartmann, B. et al., 2006. Reflective physical prototyping through
integrated design, test, and analysis. In Proceedings of the 19th annual
ACM symposium on User interface software and technology - UIST ’06.
New York, New York, USA: ACM Press. Available at: http://dl.acm.org/
citation.cfm?doid=1166253.1166300.
Kelley, T., 2001. Prototyping is the Shorthand of Innovation.
Design Management Journal, 12(3).
Kera, D., 2001. Grassroots R&D, prototype cultures and DIY innovation:
global flows of data, kits and protocols. In Pervasive Adaptation. Linz:
Institute for Pervasive Computing, p. 52.
Kera, D., 2013. On Prototypes: Should We Eat Mao’s Pear, Sail Saint-Exupéry’s Boat, Drink with Heidegger’s Pitcher or Use Nietzsche’s
Hammer to Respond to the Crisis? In V. van Gerven Oei, W. N. Jenkins,
& A. S. Groves, eds. Pedagogies of Disaster. New York: punctum books.
Knowles, J.G. & Cole, A.L. eds., 2007. Handbook of the arts in qualitative
research: Perspectives, methodologies, examples, and issues, Sage
Publications.
Krysa, J., 2006. Curating Immateriality: The work of the curator in the age
of network systems J. Krysa, ed., New York, New York, USA: Autonomedia.
Loenen, J. van, 2013. DIY (Drone It Yourself). Jasper van Loenen. Available
at: http://jaspervanloenen.com/diy/ [Accessed December 5, 2014].
Muller, L., 2008. The experience of interactive art: a curatorial study.
University of Technology, Sydney.
Negroponte, N., 1998. Beyond digital. Wired 6(12), p.288.
Oliver, J., Savičić, G. & Vasiliev, D., 2011. The Critical Engineering
Manifesto. Available at: http://criticalengineering.org/ [Accessed
January 6, 2015].
Paul, C., 2008. New Media in the White Cube and Beyond: Curatorial Models
for Digital Art C. Paul, ed., Berkeley: University of California Press.
Rhinow, H., Köppen, E. & Meinel, C., 2012. Prototypes as Boundary
Objects in Innovation Processes. In Proceedings of the 2012 International
Conference on Design Research Society (DRS 2012). Bangkok, pp. 1–10.
Rosell, B., Kumar, S. & Shepherd, J., 2014. Unleashing innovation
through internal hackathons. In Innovations in Technology Conference
(InnoTek), 2014 IEEE.
Rudd, J., Stern, K. & Isensee, S., 1996. Low vs. high-fidelity prototyping
debate. interactions, 3(1), pp.76–85.
Schrage, M., 1993. The culture(s) of prototyping. Design Management
Journal (Former Series), 4(1), pp.55–65.
Scrivener, S. & Clements, W., 2010. Triangulating Artworlds: Gallery,
New Media and Academy. In H. Gardiner & C. Gere, eds. Art Practice in a
Digital Culture. Farnham (Surrey): Ashgate, pp. 9–25.
Seravalli, A., 2013. Prototyping for opening production: from designing
for to designing in the making together. In 10th European Academy
of Design Conference - Crafting the Future. Göteborg: University of
Gothenburg.
Shepard, M., 2010. Sentient City Survival Kit - info. Sentient City Survival
Kit. Available at: http://survival.sentientcity.net/ [Accessed November
28, 2014].
Shepard, M., 2009. Sentient City Survival Kit : Archaeology of the Near
Future. In Digital Arts and Culture 2009. University of California.
Subrahmanian, E. et al., 2003. Boundary Objects and Prototypes at
the Interfaces of Engineering Design. Computer Supported Cooperative
Work (CSCW), 12(2), pp.185–203. Available at: http://link.springer.
com/10.1023/A:1023976111188.
Sutherland, I. & Acord, S.K., 2006. Thinking with art: from situated
knowledge to experiential knowing. Journal of Visual Art Practice, 6(2),
pp.125–140. Available at: http://dx.doi.org/10.1386/jvap.6.2.125_1.
Vines, J. et al., 2013. Configuring Participation: On How We Involve
People in Design. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems CHI 2013. Paris: ACM,
pp. 429–438.
Visser, F.S., 2014. New circles keep popping up. Crisp Magazine #3.
Weiser, M., 1991. The computer for the 21st century. Scientific American,
265(3), pp.94–104.
Weiser, M. & Brown, J.S., 1997. The coming age of calm technology.
In P. J. Denning & R. M. Metcalfe, eds. Beyond calculation. New York:
Springer, pp. 75–85.
Winter, M. & Brabazon, T., 2010. The intertwining of researcher,
practice and artifact in practice-based research. Practicing Media
Research, (January), pp.1–17.
Wittgenstein, L., 2010 [1953]. Philosophical investigations, John Wiley &
Sons.
Yair, K., 2010. Activism at Work - Crafting an Alternative
Business. In 7th Conference of the International Committee of Design
History and Design Studies (ICDHS). Brussels. Available at: http://
historiadeldiseno.org/congres/pdf/10 Yair, Karen ACTIVISM AT WORK.
CRAFTING AN ALTERNATIVE BUSINESS.pdf.
Interpretation,
Representation,
Material Properties:
Three Arguments About
Aesthetic Qualities
of Computational Media
Alessio Chierico
Interface Culture – Kunstuniversität, Linz, Austria
[email protected]
Keywords: Interpretation, representation, materiality,
performativity, aesthetics, computation
Computational visual media present peculiar aesthetic features
that are commonly explored through an instrumental use of technology. This paper introduces three art projects that propose a different perspective on enquiries into digital aesthetics: these works
are based on conceptual frameworks that highlight the technological manifestations of visual devices. The aim of these projects is to break the immersivity of media representation, in order
to reveal the essence of the digital image. Each project reflects on
specific concepts, here called interpretation, representation, and
material properties. These arguments take into account the
processuality of computation, as well as the materiality of media,
as distinctive elements that place the medium’s identity over its
function.
1 Aesthetics of technological expression
Artistic production is often concerned with an instrumental use of
technology, pursuing purposes that focus on attractiveness and
forgetting or hiding the qualities that are specific to a medium. In
other cases, the aesthetic qualities of technology are exploited
and explored during the production process (Levin 2009). Several theoreticians argue that the aesthetic properties of media must
be observed in their internal mechanisms and behavior (Terranova 2014, Ribas 2014, Broeckmann 2005, Penny 2008, Parisi
et al. 2011). Supporting this argument, we should also refer to the
approach of Software Art (Cramer 2002, Levin 2009). This discussion therefore asks us to entertain a perspective that accounts
for the materiality of media (Blanchette 2011, Fuchsberger et
al. 2013), considering the role this position can play in terms of
aesthetics. This paper presents three art projects made by the
author, which propose an approach that focuses deeply on media
aesthetics, positioning the instrumental role of technology in the
background. The main aim of this approach is to allow technology
to express its own qualities, underlining them within a conceptual
frame. Each project described here refers to media qualities in
visual aesthetics, formalized in the concepts of interpretation,
representation, and material properties. Interpretation considers
computation as the performative aspect of digital media. This quality concerns the response of different devices in performing the
same set of given instructions. Representation is a historical purpose of visual media, reiterated up to the latest developments of imaging technologies (Bolter et al. 2002, Huhtamo 2004,
Manovich 2009). Representation involves two complementary
operations: the image capturing process, and its formalization as a
visible image. These tasks position the technical apparatus both as the
central actor of representation and as the main agent of the image’s
ontology itself. The concept of material properties underlines the relevance of physicality in digital visual media. The example which
focuses on this aspect shows aesthetic potentials that are expressed
by the material essence of visual devices.
2 Interpretation: “Arnulf Rainer for Digital
Performers”
“Arnulf Rainer for Digital Performers, concert version” is a project that reflects on the aesthetics of visual computational systems,
focusing on the property of interpretation. In the following it is argued
how the performative essence of certain technologies leads to an
emergence of the system’s identity. The work is an installation
presented as a visual concert, which re-enacts the film “Arnulf
Rainer” by Peter Kubelka. This film is composed of an
alternation of black and white frames, which creates a stroboscopic
effect. In order to highlight the performative nature of computational media, the installation follows the concert metaphor: several monitors are placed next to each other like an orchestra on a
stage. Another monitor, with a computer, placed in front of them
assumes the role of conductor. The project is technically and conceptually based on a process that happens at the software level. On the
main computer (the conductor) a program analyzes and recreates the
original Kubelka composition in real time, in order to extract the
color value (black or white) of each frame. In the meantime,
the status of each frame is sent to the devices that control the monitors, in order to be reproduced. Thus all the monitors should show
the same color at the same time, re-enacting the visual score composed by Kubelka. According to the proposed metaphor, the main
computer, considered the conductor, is practically instructing the
devices (the players) to perform the given composition. As in a concert, each of these systems interprets the commands suggested by
the conductor by means of its own technical qualities.
Fig. 1 Arnulf Rainer for Digital
Performers, concert version
(https://vimeo.com/74364647)
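The paper does not specify how the conductor communicates with the players; purely as a hypothetical sketch of the architecture just described, the fragment below has a conductor broadcast one black-or-white value per frame over UDP while each player displays whatever value it receives. The port, the 24 fps rate and the one-byte message format are all assumptions, not the installation’s actual protocol.

# Hypothetical sketch of the conductor/player architecture described above.
# UDP broadcast, the 24 fps rate and the one-byte message are assumptions.
import socket
import time

PORT = 9000           # assumed port
FPS = 24              # assumed frame rate of the film score
score = [0, 1, 1, 0]  # placeholder for the extracted black/white frame values

def conductor(frames):
    """Send one frame value per tick to all players on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for value in frames:
        sock.sendto(bytes([value]), ("<broadcast>", PORT))
        time.sleep(1.0 / FPS)

def player():
    """Each monitor device shows black or white according to what it receives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _ = sock.recvfrom(1)
        print("white" if data[0] else "black")  # stand-in for driving the display

In such a scheme the differences in how each monitor renders the received value, and any packets lost along the way, are exactly what the work frames as interpretation.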
2.1 Interpretation and performativity
as system behaviours
Computational media have a performative nature which derives
from their processuality (Manovich 2011), and this must be taken
into account as a founding principle of any aesthetic analysis (Ribas
2014, Broeckmann 2005). Computational agency requires particular attention toward system behaviour (Penny 2008), which
leads one to consider the ethical and political implications that are
embedded in the processual automatism of computational technologies (Terranova 2014, Parisi et al. 2011). Since performativity
is considered a key aspect of computational media, we can metaphorically assume its similarity to musical performance. From
this perspective we can regard digital code as notation or script,
and the whole computational system as a musician who interprets
a musical score (Cramer 2002). Interpretation concerns the ability
to recreate a certain event, according to the understanding of the
event itself and the skills required to perform it. Consider how interpretation works in music: performers must
follow a script that tells them what they should play. Certainly,
they aim to faithfully represent the original score, but the result
of their performance depends on their qualities. In “Arnulf Rainer
for Digital Performers” it is possible to observe that each monitor
responds differently to the instructions sent by the main computer.
First of all, during the communication some information gets lost.
Secondly, different models of monitor show their own light color
and refresh speed. As in music, in this project the system’s
interpretation emerges from the understanding of the score and
the technical quality of performing it. However, the metaphors
of interpretation and performativity are used here as concepts which should not be limited to digital media. It is important to remark that these instances are proper to every technology
that carries out procedural tasks autonomously, including all devices
in the domain of mechanical and analog technologies.
2.2 Structural Film and “Arnulf Rainer”
by Peter Kubelka
During the 60s and 70s Experimental Cinema, and particularly
Structural Film, were movements that criticized cinematic
fiction, applying an artistic research that investigated the specificities and aesthetic potentials of the technical apparatus: film, lens,
camera, cinematograph, etc. (Gidal 1978, Sitney 2002). Structural Film research also included the mechanical and procedural
functioning of the machine. The approach proposed by Structural
Film was pioneering for the further development of
new media art (Chierico 2013), and is still valid as a method for the
investigation of media properties. For this reason Structural Film
was directly inspirational for the project “Arnulf Rainer for Digital
Performers, concert version”. Kubelka’s work minimized the cinematographic medium, touching upon its technical essence based
on light and variation. “Arnulf Rainer” is a film that does not propose any content: looking at the minimal image composed of
monochromatic frames, it is only possible to observe some physical impurities on the film, like dust and scratches. These elements prove fundamental: they expose the identity of the cinematographic medium.
2.3 Differences: a deconstructive method
Interpretation is a pivot point for the conceptual development of
“Arnulf Rainer for Digital Performers, concert version”. The whole
system interprets Kubelka’s composition, as orchestra players interpret a given score. In this work it is possible to perceive the
different ways in which the players perform, according to various technical factors. Indeed, it is possible to notice how the monitors behave:
communication losses between the players and the conductor,
image refresh speed, and monitor light color are just some examples of the technical issues and diversities which drive a different
interpretation of the same script. However, it is important to remark
that the image produced by these devices is notably distinct from
the one produced by the cinematograph in Kubelka’s version.
Dust and impurities are unavoidable elements of that old technology, just as current monitors have flat and cold colour as
distinctive elements of their technical apparatus.
3 Representation: “Emerging Aura”
“Emerging Aura” is a work that explores the representational qualities of the imaging technologies used in digital photography. Visual
representation is the result of a process which includes an input,
the capture of the subject and its codification into the digital domain, and an
output, the visualization of the image. “Emerging Aura” focuses on the
acquisition process of images, to show how this action determines
the aesthetic of representation and the identity of the capturing
device. The work is formalized as an installation that consists of a
video projection mapped onto a monitor. The video is composed of
a sequence of sixty-nine photos of this same monitor, taken with
as many different devices: web cams, smartphones, cameras, video cameras, laptops, tablets, etc. All of these pictures
were cropped and scaled in order to overlap each other at the same
predetermined size. The resolution used for scaling the images was
found by calculating the average between the lowest and the highest
resolution of the original pictures. This editing permits the use of
these images as frames of the video. The monitor becomes the
surface of the video projection, where the images are mapped in
order to be overlapped with the real monitor.
Fig. 2 Emerging Aura
(https://vimeo.com/92341020)
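As a hypothetical reconstruction of the editing step just described, the sketch below scales a folder of photographs to a common size computed as the average of the smallest and largest original resolutions, so that they can be stacked as video frames. It assumes the Pillow library and simple centre-cropping; the actual edit may well have been done differently.

# Hypothetical sketch of the normalisation step described above, assuming Pillow.
# Target size = average of the smallest and the largest original resolutions.
from pathlib import Path
from PIL import Image

photos = [Image.open(p) for p in sorted(Path("photos").glob("*.jpg"))]

widths = [im.width for im in photos]
heights = [im.height for im in photos]
target = ((min(widths) + max(widths)) // 2, (min(heights) + max(heights)) // 2)

frames = []
for im in photos:
    # Scale first, then centre-crop to the shared frame size (an assumption;
    # the paper only says the images were cropped and scaled to one size).
    scale = max(target[0] / im.width, target[1] / im.height)
    resized = im.resize((round(im.width * scale), round(im.height * scale)))
    left = (resized.width - target[0]) // 2
    top = (resized.height - target[1]) // 2
    frames.append(resized.crop((left, top, left + target[0], top + target[1])))

for i, frame in enumerate(frames):
    frame.save(f"frame_{i:03d}.png")  # the frames can then be assembled into the video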
3.1 “Emerging Aura”: aim and method
This project aims to highlight the fictional status of visual representation, showing how the subject depicted in the pictures is
interpreted differently by diverse technical apparatus. This process exposes the specificity of the devices used. “Emerging Aura”
employs a deconstructive approach toward representation, applying a method based on differences. The immersivity of representation tends to hide the visual qualities of the digital image. When
the user is involved in the narrativity of representation, image
aberrations are only slightly perceived (O’Regan 1992), because the
attention moves away from the image itself to focus on the
content. Comparisons between images underline their differences; the denaturation of representation therefore permits us to
delineate the subjectivity of visual media. Moreover, the projection mapped onto the monitor represented by the pictures offers
another evident difference, which emerges from its relation with the
original shape of the monitor. The virtuality of representation is
overlapped with the physicality of the object. This attempt to combine
them as a single element works, on the contrary, as evidence of the real
distance between them.
3.2 Specificity as media uniqueness
The name “Emerging Aura” is inspired by the concept of aura in
Walter Benjamin (2000). In order to avoid any misunderstanding,
it must be clarified how this term is used. In short, Benjamin’s concept of aura is meant as a supposed value given by the uniqueness of unreproducible art pieces (Benjamin, 2000). However, this
project exploits the term aura in a different sense: it is considered
as the uniqueness of media-driven representation. This is a conceptual step that aims to underline how every medium conserves
its uniqueness in the peculiar aesthetics that emerges from its
technical properties. Contents shown by media are obviously
characterized by the technological qualities of the whole process of
recording and reproduction. In “Emerging Aura” there are two
conceptual elements that must be highlighted: firstly, the monitor, which is commonly the object that shows the representation, becomes in this context the subject of representation, as
well as its physical support. The second important element is the
shadow of the devices, which is projected onto the monitor from the
moment of shooting. The device records a trace of its presence in
space and time, representing itself. Here the aura emerges by connecting the representation of the monitor with the physical uniqueness
of the medium: its nature as an object.
3.3 A survey on representation
Considering the historical trajectory of visual technologies, on
one hand we can notice that representation drove the evolution
of devices which attempt to simulate the real, framing contents
out of their context. During the Renaissance, Leon Battista Alberti
theorized perspective by comparing the space of representation
to “an open window through which the subject to be painted is
seen.” (Bolter & Grusin, 2002) The intent to simulate reality is
clear, and it is also obvious how this comparison promises that
a potential virtual reality can occur there; we can identify this very
moment as a pivot point for further progress in the history
of representation. Manovich (2009) and Huhtamo (2004) found
that the frame plays an important role, empowering the “window” metaphor and isolating the fictional space from the context in which it is placed. For this reason Alberti’s window is seen
by Vilém Flusser (1977) as a door through which one can enter a new
reality. Representation is a powerful driving force in the development of media, but it is totally disinterested in them. Visual
media aim to satisfy representation while hiding themselves. In
this way the fruition of media is strongly alienated from the complex nature of the object. (Bolter & Grusin, 2002) Recent imaging
technologies increasingly move the task of representation from
the hardware to the software domain: algorithms are in charge of reconstructing the image from its capture, in order to supply an attractive
quality. This brings a deep abstraction of the image’s referent; as
noted by Hito Steyerl, this process suggests a parallelism with the
functioning of the concept of representation in democratic politics. In
other terms, the aesthetic distortion that lies between the
subject and its algorithmic depiction corresponds to the distortion between the democratic system and its mediated conception.
(Jordan 2014) For this reason Steyerl stands for the poor image: in
the seams of its aesthetic are contained signs which manifest
the technological mediation of cultural practices. (Steyerl 2012)
4 Material properties: “Unpainted Undrawn”
“Unpainted Undrawn” shows the potential aesthetics which are
intrinsic to the materiality of digital visual devices. Technologies
of representation are historically bound to the physicality of their
support, but the dematerialization of the image, which occurred
with the advent of digital devices, brought the idea that representation no longer belongs to the visual support. Instead,
its physicality is an unavoidable and active element of representation. “Unpainted Undrawn” is a series of works that consist of cracked screens inserted in classical and modern picture
frames. These screens are collected from discarded devices such
as tablets, e-book readers, smartphones, monitors, and several
other kinds of digital devices with broken screens. In LCD screens
the array of pixels is contained by a panel of liquid crystals. If
this panel is damaged by an impact, the liquids can mix in the
matrix, creating unrecoverable spots on the screen. Another
common kind of damage to LCD monitors occurs to the tiny connections that control the state of each pixel. This kind of damage
creates rows or columns of coloured pixels that remain permanently active on the monitor. Like LCD screens, other
imaging technologies also expose their materiality when damaged. For instance electronic paper, commonly
used in e-book readers, is another technology used in “Unpainted
Undrawn”. It is argued here that the damage to the screens reveals the
material essence of digitally based images, as well as their potential aesthetics. This is the argument which drove the development
of this project. The frame into which these screens are inserted
plays a very important role within the concept of this work. First
of all, the frame relates to the ancient and romantic stereotype of the
artwork. Thus the frame’s presence is an ironic attempt to elevate
the aesthetics of cracked screens to the status of art. At the same
time, the frame recalls the medium of painting, and therefore its
materiality, which is explicitly relevant for image creation. In the
history of painting, the awareness of materiality emancipated the
technique from the status of flat visual representation. In a similar way, the images of these devices are not representations driven
by software, but a physical expression of the screen/medium.
Fig. 3 Untitled from
“Unpainted Undrawn” series
4.1 The materiality of digital media
The word “media” has a communicative connotation; however, it
is important to underline that when we refer to media, we should
also take into account their objectual nature. Conversely, objects
have embedded communicational properties that must be taken
into account. Starting from this perspective it is important to consider the concept of transparency formulated by Bolter and Grusin
(2002). From their point of view, transparency is the attempt of a
medium to hide its objectual identity: devices are designed to be
transparent in order to highlight the fiction of representation. As
Huhtamo noticed: “The history of the screen fluctuates between
the imagination and the world of things. As gateways to displaying and exchanging information, screens are situated in the liminal zone between the material and the immaterial, the real and
the virtual”. (Huhtamo 2004) In reference to digital systems, bits
are commonly understood as immaterial mathematical units. On
the contrary, bits are physical, electrical entities which operate as
mathematical units for calculation purposes. Even if this seems
obvious, it is very useful to remind ourselves that digital systems
cannot exist without the physical constraints of bits and devices.
(Blanchette 2011) For this reason materiality is a property which
must be taken into account from several points of view, and in
every conception, analysis, understanding and design relating to
digital media.
4.2 Aesthetics of seams
Structural Film, as already argued, bases its aesthetics on the
seams which expose the technical nature of cinema. One direction of this movement, named Materialist Film (Gidal 1978), was
more directly concerned with expressing the materiality of the
medium, as previously shown with the example of “Arnulf Rainer”
by Kubelka. In traditional Japanese ceramics, the mending of artifacts is
seen as an aesthetic opportunity, where imperfection becomes a
value (Kopplin 2008). This is evident in the Kintsugi method: fractures between the pieces of a broken ceramic are filled with precious
metals (gold, silver or platinum). In Kintsugi, mending ceramics
does not have the sole intent of restoring the object’s functionality;
it is an aesthetic choice that shows the physical consistency of
the artifact as well as a story of the object. Moreover, the randomness of the fractures determines the artifact’s uniqueness. (Zoran
et al. 2013, Ikemiya et al. 2014) Similarly, “Unpainted Undrawn”
exploits the occurrence of damage in order to offer an aesthetic
formed by the materiality of the object. It is relevant to notice that the
images shown in “Unpainted Undrawn” look aesthetically and
conceptually close to Glitch Art. In both cases, the images proposed are the expression of a technical feature, which emancipates
itself over the contents of representation. However, the
aesthetics of error to which Glitch Art refers emerges from the performativity of computational media. (Parisi 2011) Thus, the glitch is
an error induced by algorithms (intentionally or not) that does not
presume any structural intervention or damage.
5 Conclusion
This paper has presented an artistic approach which reflects on the aesthetic features of digital visual media. Interpretation, representation and material properties are concepts that illustrate some
aesthetic and ontological issues of the digital image. Here the intention is to show an artistic method which focuses on the specific
properties of media, and which moves their instrumental use to
the background. This movement corresponds to an emergence of
image technologies, and to a deconstruction of contents, which
results in a rupturing of the fiction of representation. Certainly,
representation is the key element of this argument. Without
the intent of representation, any visual medium loses its raison
d’être, because there are no media which are purely conceived for
abstraction. However, it is important to clarify that the three
concepts expressed in this text do not correspond to
a taxonomy of digital visual media, and they are not separate
from one another. On the contrary, they are complementary: as stated
previously, materiality is an unavoidable aspect of every medium
which determines the limits and potentials of the image, and if we
consider representation as the purpose of media, interpretation concerns the ability to achieve this purpose. In conclusion, it is necessary to highlight that the projects described here assume an
aesthetic and demonstrative role. In fact, even if they point to the
vanishing of representation, they are themselves a representation
of concepts. The development of the works raises an unsolved issue that
leads into the vortex of impossible coherence: how can technology truly express itself, when it is an artistic/human intention that
defines the meanings and reasons for its aesthetics? In other words, is
it possible to consider these projects as platforms for technological expression, or as appropriations of technological aesthetics? In
both cases, the motivation behind this deconstructive approach
is a desire to elevate imaging technologies to the status of art, in
order to criticize our mediated representation of the world. An
iconoclasm moved by technological realism.
References
Benjamin, Walter. L’opera d’arte nell’epoca della sua riproducibilità tecnica.
Torino, Einaudi, 2000
Blanchette, Jean-François. A material history of bits. Journal of the
American Society for Information Science and Technology. Volume 62,
Issue 6, pages 1042–1057, June 2011
Bolter, Jay David; Grusin, Richard. Remediation. Competizione e
integrazione tra media vecchie e nuovi. Milano, Guerini Studio, 2002
Broeckmann, Andreas. Image, Process, Performance, Machine. Aspects of
a Machinic Aesthetics. Lecture manuscript for the Refresh! conference,
Banff/Canada, 29 Sept. 2005 - http://www.mediaarthistory.org/
wp-content/uploads/2011/05/Andreas_Broeckmann.pdf - accessed on
04/01/2015
Chierico, Alessio. Techno indeterminism, the common line between
structural film and new media art. Digimag journal. Issue 74 / Winter
2013.
Cramer, Florian. Concepts, Notations, Software, Art. netzliteratur.net.
23/03/2002 - http://www.netzliteratur.net/cramer/concepts_notations_
software_art.html - accessed on 05/01/2015
Flusser, Vilem. Two Approaches to the Phenomenon, Television. In Davis,
D. & Simmons, A. (Eds.), The New Television: A Public/Private Art.
Cambridge: MIT Press, 1977
Fuchsberger, Verena; Murer, Martin; Tscheligi, Manfred. Materials,
Materiality, and Media. In Proceeding CHI ‘13 of the SIGCHI Conference
on Human Factors in Computing Systems. Pages 2853-2862. ACM New
York, NY, USA 2013.
Gidal, Peter. Structural Film Anthology. London, BFI, 1978
Huhtamo, Erkki. Elements of screenology: toward an archaeology of the
screen. Published in ICONICS: International Studies of the Modern
Image, Vol.7, pp.31-82. Tokyo: The Japan Society of Image Arts and
Sciences, 2004
Ikemiya, Miwa; Rosner, Daniela K. Broken probes: toward the design of
worn media. Personal and Ubiquitous Computing, Volume 18 Issue 3,
Pages 671-683, Springer-Verlag, London, 2014
Jordan, Marvin. Hito Steyerl, Politics of Post-Representation. DIS. 2014
- http://dismagazine.com/disillusioned-2/62143/hito-steyerl-politics-ofpost-representation/ - accessed on 25/07/2014
Kopplin, Monika. Flickwerk, The Aesthetics of Mended Japanese Ceramics.
In: Exhibition catalogue, Herbert F. Johnson Museum of Art, Cornell
University, Ithaca, 2008
Levin, Golan. Audiovisual Software Art: A Partial History. flong.com
09/05/2009 - http://www.flong.com/texts/essays/see_this_sound_old/ - accessed on 05/01/2015
Manovich, Lev. Il linguaggio dei nuovi media. IX edizione, Milano,
Olivares, 2009
Manovich, Lev. Software Culture. Milano, Olivares, 2011. Italian translation of:
“Software Takes Command”
O’Regan, Kevin. Solving the “real” mysteries of visual perception: The
world as an outside memory. Canadian Journal of Psychology/Revue
canadienne de psychologie, Vol 46(3), Sep 1992, 461-488.
Parisi, Luciana; Portanova, Stamatia. Soft thought (in architecture and
choreography). Computational Culture Journal, Issue 1, November 2011
Penny, Simon. Experience and abstraction: the arts and the logic of
machines. The Fibreculture Journal. 11 Issue 5, 2008
Ribas, Luísa. Perspectives on digital computational systems as aesthetic
artifacts. CITAR Journal, Volume 6, No. 1 – Special Issue: xCoAx 2014
Sitney, P. Adams. Visionary Film: The American Avant-Garde, 1943-2000.
Oxford University Press. 2002
Steyerl, Hito. In Defense of the Poor Image. In Steyerl, Hito. The Wretched
of the Screen. e-flux journal books. New York, 2012
Terranova, Charissa N. Systems and Automatisms: Jack Burnham, Stanley
Cavell and the Evolution of a Neoliberal Aesthetic. Leonardo, Journal of
Arts, Sciences and Technology Vol. 47, Issue 1, pp 56-62. 2014
Zoran, Amit; Buechley, Leah. Hybrid Reassemblage: An Exploration of
Craft, Digital Fabrication and Artifact Uniqueness. Leonardo, Journal of
Arts, Sciences and Technology Vol. 46, Issue 1, 2013
‘Let’s Talk Business’:
Narratives Used in Email
and Phone Scams
Andreas Zingerle
University of Art and Design, Linz, Austria
[email protected]
Keywords: phone scams, audio installation, interactive
storytelling, reverse engineering, artivism.
Sixteenth-century ‘face to face’ persuasion scams have adapted to letters,
telephone, fax and the Internet with the development of new communication technologies. In many of today’s fraud schemes phone
numbers play an important role. Various free-to-use online tools
enable scammers to hide their identities with fake names,
bogus business websites, and VoIP services. These fake businesses or personas can appear more legitimate when connected
to a phone number, enabling faster, more personal contact with
the victims. Using a typology of a sample of 374 emails
commonly used in business proposal scams, the emails were categorized and tested to see how believable the proposals sound
once the scammers are contacted by phone. The research can be
explored in a 5-channel interactive audio installation called ‘Let’s
talk business’ that uncovers which business proposals and scam
schemes are commonly used, and how believable the proposals
sound once the scammers are called.
1 Introduction
Phone fraud can be described as a ‘fraudulent action carried out
over the telephone’ and can be divided into ‘fraud against users by
phone companies’ (cramming, slamming), ‘fraud against users by
third parties’ (809-scams, dialer programs, telemarketing fraud,
caller ID spoofing), ‘fraud against phone companies by users’
(phreaking, dial tapping, cloning) and ‘fraud against users by users’
(vishing, SMS spamming). The different fraudulent actions can
also be divided into technical hacking, social hacking, and mixes
of both. (Rustad, 2001) The ‘phreaking’ subculture, which is also
seen as a forerunner of hacking culture, makes use of both technical and social hacking tactics. By the middle of the 20th century
technophiles started exploring the US AT&T network. Listening
to the sounds during the in-band signaling connection process,
they learned how the network was set up and which metadata was
transferred, and were able to reverse engineer the routing of each
call. These ‘phone phreaks’ built so-called ‘blue, red, or rock
boxes’ that dialed tones and other audio frequencies to manipulate
the phone system. With these devices it was possible to make free
long-distance phone calls, which was illegal and called ‘toll fraud’.
They also used social engineering techniques to impersonate
operators and other telephone companies. Important representatives of this subculture include John ‘Captain Crunch’ Draper,
Steve Jobs and Steve Wozniak, the latter two of whom later founded Apple Computer.
In the 80s computer hackers began to use phreaking methods to
find the telephone numbers of business modems in order to exploit them. Due
to technological advancements, phreaking has lost its popularity,
but is still marginally practiced. (Lapsley, 2013)
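Purely to illustrate what an in-band signalling tone is, the sketch below writes a few seconds of the well-documented 2600 Hz supervisory sine tone to a WAV file; it has no connection to the artwork and, of course, cannot affect any modern network.

# Illustration only: generate the 2600 Hz tone associated with in-band
# signalling on the historical AT&T network and save it as a WAV file.
import math
import struct
import wave

RATE = 44100      # samples per second
FREQ = 2600.0     # the classic supervisory tone frequency, in Hz
SECONDS = 2

with wave.open("tone_2600hz.wav", "wb") as wav:
    wav.setnchannels(1)     # mono
    wav.setsampwidth(2)     # 16-bit samples
    wav.setframerate(RATE)
    for n in range(RATE * SECONDS):
        sample = int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / RATE))
        wav.writeframes(struct.pack("<h", sample))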
Whereas ‘phreakers’ mainly focused on technical hacks, curious anti-scam activists called scambaiters adapted more of the
social engineering tactics, finding methods to safely communicate
with scammers and working out how the scams operate in order to warn
potential victims. This paper focuses on the ‘user to user’ fraud
that is done by email and phone scams. Typically these scams
involve storytelling and some sort of social engineering, where
the fraudster creates a hyper-realistic ‘too good to be true’ situation for a mark, in order to extract sensitive data and/or money
from the victim. (Maggi, 2010) (Mitnick, 2002) These scambaiters host informative websites where scams are reported and host
forums where people can discuss suspicious business proposals.
There are several forums dedicated either to specific scam genres
(e.g. romance scams or rental scams) or to the technology used (e.g. email
scams, phone scams). One of these platforms is ‘Scamcallfighters.
com’, a non-profit organization that maintains a user-contributed
database of phone numbers that are used in scam attempts. This
organization aims to help people who are under threat of financial loss due to phone scams. On their website they publish
scam-related phone numbers and details of scam incidents, and inform
about ongoing cybercrime attacks. Confidence tricksters often
attack victims who have already been scammed, duping them into
paying even more money. The best defense against phone scams is
knowledge about this type of scam and a public blacklist.
The artwork ‘Let’s talk business’ is the outcome of an exploration
aimed at understanding in which scam narratives
phone numbers are used and how the narratives are extended
when the scammers are approached by calling them. The paper therefore
proceeds as follows:
• Section 2 contextualizes the artwork ‘Let’s talk business’ in a
historical canon of related telephone artworks.
• In Section 3 a sample of scam emails taken from an anti-scam
database is categorized into 10 scam types, with a focus on the
top 5 countries from which the phone numbers originate.
• Section 4 documents the process of calling scammers and the
design of the artwork ‘Let’s talk business’.
2 Related works
In 1922 Laszlo Moholy-Nagy wanted to prove that the intellectual
approach to the creation of a work of art is in no way inferior to the
emotional approach. He called a sign manufacturer and ordered
‘three steel panels of diminishing size, covered with white porcelain enamel and bearing a simple geometric design in black, red
and yellow’. He didn’t provide any sketches, nor did he supervise
the execution of his order. (Kac, 1992)
In 1969, the Chicago Museum of Contemporary Art hosted the
exhibition ‘Art by Telephone’. In the cover text of the catalog, Jan
van der Marck explains that the exhibition was planned to record
the conceptual art trend. The exhibited artworks were to be conceptualized and designed by the artists in their own countries. These
concepts were transmitted to Chicago and then executed on-site at
the museum on their behalf. The telephone was a fitting medium
to communicate the instructions between the artists and those
who were entrusted with the production of the artworks. Inspired
by Laszlo Moholy-Nagy’s experiment, no drawings, blueprints, or
written descriptions were allowed. These calls were recorded and
made into a vinyl record. The sound recording became the show,
as the works were never fabricated. Artists included Jan van der
Marck, Richard Artschwager, John Baldessari, Robert H. Cumming,
Dick Higgins, Sol LeWitt, Bruce Nauman and Wolf Vostell, amongst
others. (Art by Telephone, 2010)
By the 1970s Andy Warhol supposedly had no more substantial
involvement in the production of his paintings. The printer
Rupert Smith once claimed that even Augusto, the security man,
was making paintings. Andy gave many instructions over the
telephone about what the paintings should look like and which
colors the printer should use. (Colacello, 2014)
In spring 2011 Terri C. Smith curated the exhibition ‘It’s for you
- conceptual art and the telephone’ at the Housatonic Museum
of Art in Bridgeport, Connecticut. (Smith, 2011) The exhibition
brings together artworks that use the telephone as a medium or
as a mediator, which fit the category of ‘ludicrously simple ideas,
but one that allows itself to be complicated and expanded through
a myriad of formal and intellectual approaches’. The artworks use
language-as-media, democratic impulses through audience participation and broader distribution methods. During the exhibition people performed John Cage’s ‘Telephones and Birds’, where
three people perform the work using bird-calls and public service
messages from phones.
In early works the telephone has been used to remotely produce artworks by giving instructions. Later, the phone was used
to connect artists with strangers or as an interface to access prerecorded audio messages. Whereas the act of dialing a number,
talking and listening remained similar, the technological systems
changed from landlines to mobile and VoIP telephony.
In Yoko Ono’s piece “It’s for you” the artist might call the gallery
as part of her Telephone Piece, providing direct contact between
artist and the audience. Pietro Pellini’s work ‘Al Hansen on My
Telephone’ is an archive of the Fluxus artist Al Hansen. When the
phone rings in the gallery, short audio clips of him talking about
art, life, the Ultimate Akademie and other topics can be listened
to. (Smith, 2011)
Other recent artworks include ‘The representative’ by Carey
Young. In the installation visitors are invited to ‘get to know’
a call center agent, who normally represents large corporations to the public. The agent was hired by the artist and was
scripted to talk about certain topics based on interviews with the
agent (Young, 2006). In 2006 !Mediengruppe Bitnik created
the artistic intervention ‘Opera Calling’. They placed cell
phones, so-called ‘audio bugs’, within the auditorium of the opera
house in Zürich, in order to give the outside public the possibility
of accessing the performances. The performances were also retransmitted to the public through a calling machine that called each
person in Zürich individually. (!Mediengruppe Bitnik, 2006) The
artwork ‘The evidence of things not said’ by Afshar, Brunnthaler
and Schulze is a prepared public phone booth where people can listen
to an archive of racist incidents documented in Austria by the
anti-racism organization Zara. (Afshar, 2009)
3 The Narratives used in Email scams
As a raw dataset I took a sample of 374 emails containing phone
numbers, collected over a period of three weeks
from Nov. 11 to 30, 2014, from the ‘scammed.by’ scam email database. This website was created in 2010 under the name ‘baiter_base’,
a place for scambaiting activists who document the activities of
Internet scammers. The website provides a service for sending in suspected scam emails, which are then automatically analyzed, categorized and published. From these emails I then extracted the
phone numbers per country. The numbers were most frequently
connected to the following countries: Benin (113 emails), Nigeria (90 emails), USA (32 emails), South Africa (23 emails), Burkina
Faso (18 emails). These top five countries, in total 277 emails,
were further categorized according to their narrative structures.
Table 1 and Chart 1 illustrate the ten different scam schemes
that originated from these top-ranked countries. In the following paragraphs, each scam scheme is described with an example
email snippet.
Table 1 Scam schemes of the top
5 countries
Chart 1 Graphical representation
of the scam schemes of the top
5 countries
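The author does not describe the tooling used for the extraction step above; as a hypothetical sketch, the snippet below pulls candidate international numbers out of plain-text emails with a regular expression and tallies them by calling-code prefix. The folder name, the regex and the prefix table (which covers only the five countries discussed) are all assumptions.

# Hypothetical sketch of the extraction step: find international phone numbers
# in plain-text emails and tally them by country calling code.
import re
from collections import Counter
from pathlib import Path

PREFIXES = {"229": "Benin", "234": "Nigeria", "1": "USA",
            "27": "South Africa", "226": "Burkina Faso"}

# Very loose pattern: a '+' followed by digits, spaces or dashes.
NUMBER = re.compile(r"\+[\d][\d\s\-]{7,}")

counts = Counter()
for mail in Path("emails").glob("*.txt"):          # assumed folder of raw emails
    for match in NUMBER.findall(mail.read_text(errors="ignore")):
        digits = re.sub(r"\D", "", match)
        # Check longer prefixes first so e.g. '229...' matches Benin, not a shorter code.
        for prefix in sorted(PREFIXES, key=len, reverse=True):
            if digits.startswith(prefix):
                counts[PREFIXES[prefix]] += 1
                break

print(counts.most_common())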
3.1 The ‘Fund Transfer from Bank’ Scam
In this scam attempt the fraudster claims to be (or to be related to) a
bank representative who offers a large sum of money to be transferred to the victim’s account. Normally a small service charge has
to be paid for finalizing the paperwork and sending the money to
the victim’s account:
I have Paid the fee for your Cheque Draft. But the manager of ECo
Bank Benin told me that before the check will get to you that it will
expire. […] Finally, make sure that you reconfirm your Postal address
and Direct telephone number to them again to avoid any mistake on
the Delivery and Let me repeat again,try to contact them as soon as
you receive this mail to avoid any further delay and remember to pay
them their Security Keeping fee of $45 for their immediate action. [...]
You can even call the Director of BELLVIEW DIPLOMATIC COURIER
COMPANY DR.UGO LORD with this line +229--997-88-167
3.2 The ‘Beneficiary Payment’ Scam
A bank representative contacts the victim because of a beneficiary payment that the victim should receive, mostly without any
further details. The victim is asked for personal information and
advised to get in touch with the bank:
This is the second time we are notifying you about this said fund.
Please as a matter of urgency, you are required to verify the following
information and inform us if you are aware or know anything about
this. This morning Mr. John T. Kehoe came to the office claiming that
you have instructed him to come and receive the payment on your
behalf with some representatives.
I have ask them to come back tomorrow as they did not provide any
power of an attorney from you which will proof that you thoroughly
send them, This was to enable me contact you to verify how genuine
this people are to you. We wait for your call on +234-8120695218 or
email at <[email protected] com> and urgent respond
to this bank so that you will be giving an immediate response.
3.3 The ‘Product-selling’ Scam
The victim is contacted by a shop owner or bank clerk who wants
to ship a valuable product to the victim, which has already been
paid for. A small fee for sending or insuring the item has to be
paid in advance:
You are advised to fill this application form and return it back to
us as to enable us proceed your ATM VISA CARD. Being desirous of
availing the facility of using ATM Visa Debit card, I/We furnish the
information below. […]
3.4 The ‘Follow-up’ Scam
The follow-up scam addresses former scam victims who fell for an
unsolicited offer and paid money to a fraudster. An organization
like the Nigerian EFCC, the US FBI, the UN or World Bank claims
to compensate a number of scam victims. The victims just have
to provide evidence that they lost money and can then get some
amount of money refunded:
This is to bring to your notice that we are delegated from the
UNITED NATIONS in Central Bank to pay 50 victims from your
country who has being Victims of Internet scam .The United Nations
has decided to pay you $8,500,000 USD (Eight Million Five Hundred
Thousand Dollars) each. You are listed and approved for this payment as one of the scammed victims to be paid this amount […]
This email is to all the people that have been scammed or extorted
money from […] We found your email in our list and that is why we
are contacting you […] Contact Pastor Johnson Morris immediately
for your Cashier Cheque.
3.5 The ‘Lottery Winner’ Scam
The recipient of the email has supposedly won a lottery, often one run
by the Spanish state, such as El Gordo or La Primitiva. Other lotteries
include the Microsoft lottery, where people win simply by using Microsoft products, or email lotteries where email addresses win a
prize. Sometimes fraudsters claim to represent a multinational
corporation that runs an employee lottery, and contact the company’s workers worldwide using their work email addresses.
Please have you received the USD$2.850,000.00 that your email
ID won for you? If this email ID is still active and working feel free
to contact Dr.Bankole Williams of EcoBank Plc to claim your winning fund. Contact person: Dr.Bankole, Tel: +229 98393457 Once you
contact them they will instruct you how to receive your winning fund.
Congratulation once again...
3.6 The ‘Service Offer’ Scam
In this scam type, scammers offer cheap services, e.g. credit loans
at very low interest rates:
We, GLOBAL FINANCIAL SERVICES Credit Union offers loan at a
very low interest rate of 3% per year […] Have you been turned down by
your bank? Do you have bad credit? Do you have unpaid bills? Are you
in debt? Do you need to set up a business? Worry no more as we are
here to offer you a low interest loan. Do not hesitate to contact us on
the telephone, fax and email address below for further clarification(s)
3.7 The ‘Next-of-Kin’ Scam
The victim is contacted by a bank representative, barrister or lawyer seeking someone to stand in as next-of-kin, in order to inherit
a sum of money from a deceased person:
However, it’s just my urgent need for foreign partner that made me
to contact you for this transaction; I got your contact from the professional data base found in the Internet Yahoo tourist search when
I was searching for a foreign reliable partner. […] I have the opportunity of transferring the left over sum of ($10.5 Million Dollars) that
belongs to late Mr Rudi Harmanto, from Indonesia who died along
with his entire family in the Asia Earth Quake (TSUNAMI, DISASTER
IN INDONESIA / INDIA. 2004, and since then the fund has been in a
suspense account. […] according to the laws and constitution guiding
this banking institution, stated that after the expiration of (10) years,
if no body or person comes for the claim as the next of kin, the fund will
be channel into national treasury as unclaimed fund. Because of the
static of this transaction I want you to stand as the next of kin so that
our bank will accord you their recognition and have the fund transfer
to your bank account. Hence, I am inviting you for a business deal
where this money can be shared between us in the ratio of 50/50. […]
3.8 The ‘Package Found’ Scam
In this scam attempt the scammer claims to be a representative of a logistics company or a Homeland Security official, who
contacts the victim because of an undelivered package that was
found at an airport freight station. In order to release the box and
discuss further steps, one should call a number:
[…] I am writing to you regarding on your abandoned consignment
box worth 4.5 million dollars...so kindly reconfirm your full address,
Full name, Phone number,and nearest Airport.I wait for your urgent
and positive respond.you can call presidency officer MR DAVID
BROWN Who is the incharge of releasing the box to me. Call +229
6833084 or email <zenithbanknigeria.
3.9 The ‘Refugee’ Scam
In the refugee scam a young woman is seeking a person overseas
who can help her as a trustee to transfer money from the family’s
bank account. Parts of her family died in a plane accident, so she
also provides a link to a western news agency, where background
information about the tragic story can be read. She is now trapped
in a refugee camp where she has limited access to the Internet.
To get in contact with her, she shares the cell phone number of a
pastor she can trust:
My name is Miss Samira Kipkalya Kones, 23yrs old female and I
held from Kenya in East Africa. My father was the former Kenyan road
Minister. He and Assistant Minister of Home Affairs Lorna Laboso
had been on board the Cessna 210, which was headed to Kericho and
crashed in a remote area called Kajong, in western Kenya. [...] After
the burial of my father, my stepmother and uncle conspired and sold
my father’s property to an Italian Expert rate which the shared the
money among themselves and live nothing for me. [...] So I decided to
run to the refugee camp where I am presently seeking asylum under
the United Nations High Commission for the Refugee here in Ouagadougou, Republic of Burkina Faso. One faithful morning, I opened my
father’s briefcase and found out the documents which he has deposited huge amount of money in one bank in Burkina Faso with my
name as the next of kin. […] I am in search of an honest and reliable
person who will help me and stand as my trustee so that I will present
him to the Bank for transfer of the money to his bank account overseas. [...] the only person i have now is Rev Pastor. Godwin Emmanuel
(+226 777 458 10 ) Please you can get me though Rev Pastor Godwin
number Please if you call him tell him that you want to speak with me
he will send for me in the hostel, Kisses and warmest regards
3.10 The ‘Orphanage’ Scam
The orphanage scam involves NGOs that seek monetary assistance for local orphans or orphanage institutions. By supporting
their causes one can help to build schools or libraries and support
their free-time activities:
Please join the efforts between Life and Death to change the lives
of children in Nigeria by supporting the orphanage call +234 <zenithbanknigeria […]
4 The development of ‘Let’s talk business’
After categorizing the scam narratives we proceeded to call the scammers. Before doing so, we wanted to know what means were necessary to stay anonymous and safe without leaving a trail that could lead back to us. An interview from the ‘Area 419’
podcast series explained one method for setting up a connection
to a scammer. ‘Area 419’ was a popular radio podcast that aired
on a weekly basis between Feb. and Oct. 2010. (Area 419, 2010)
The podcast covers background stories of the scambaiting forum 419eater.com, advice on scambaiting, interviews with scam-activists, and audio clips of phone calls with scammers. Podcast #2 includes an interview with a scambaiter called ‘SlapHappy’,
who talks about his experiences with calling scammers. He uses
a VoIP service and has a worldwide plan to call any landline for
free. When a scammer doesn’t fully trust him in an email conversation, he calls them to build up his trustworthiness. For him it is hard to grasp that the person on the phone is a criminal trying to persuade him to pay money. Often, the poor connection quality and the scammers’ thick accents make a conversation hard to understand. He uses the ‘cold-calling’ method to call the scammer and improvises during the conversation.
Next, a VoIP account was set up under this pseudonym, including a worldwide landline-calling package. The QuickTime Player software was used for recording the voices of the scammers.
Before calling the scammers we created a fictional persona with a
name and country of origin. When a connection to a scammer
was established, the scammer was informed that the email was
received, but not all relevant parts fully understood, so the situation and the next steps should be explained to us once again.
Then the scammers had time to explain the situation and how we
should proceed further.
The installation consists of five modified SPAM-cans (see Fig. 1 [C]) that are normally used to store precooked ‘SPiced hAM’ produced by the Hormel Foods Corporation. According to Merriam-Webster’s dictionary, the naming of unwanted mass advertisement as ‘Spam’ originates from ‘the British television series
Monty Python’s Flying Circus in which chanting of the word Spam
overrides the other dialogue’. The sketch premiered in 1970, but it
took until the 1990s for mass emails, junk phone calls or text messages sent out by telemarketers to be called ‘spam’. (Templeton)
While most of the scam emails tend to end up in the SPAM folder,
we chose to mediate these stories through physical SPAM-cans.
Contact microphones and audio players are attached to four
of the cans, so that visitors can listen to the scammers’ different
narratives that were recorded. The fifth device has two buttons: one button connects the visitor to a randomly chosen number from a database of scammers’ phone numbers; the other button disconnects the call.
Next to the work is an information board providing instructions
for talking to the scammers. With the fifth can we want to provide the visitor with an opportunity to be anonymously connected with a scammer. This is an experience of being nervous about who
will answer the phone, trying to understand the narrative, and
judging whether one would fall for such an offer or not. By providing instructions to the visitor, we want to pass on some guidelines
and open questions that the visitor can ask the scammers. The
guidelines include ‘Play along to figure out the scam’, ‘Never tell any personal information’ or ‘You are talking to criminals – still they are humans!’. Open questions can help the scammers to tell more about themselves or their schemes: ‘Tell me, what do we do next?’, ‘How can I trust you?’ or ‘Is this operation safe?’. On a wall
next to the pedestal are two clocks indicating ‘Local’ and ‘Nigerian’ time (see Fig. 1 [A]). The best placement for the work is on a
50 x 50 x 130 cm pedestal (see Fig. 1 [B]). Inside the pedestal there is a computer with an Internet connection that enables the anonymous VoIP communication between the visitor and the scammer.
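The paper does not document the software that drives the fifth can, so the following Python fragment is only a minimal sketch of the mechanism described above. It assumes a small SQLite table of phone numbers harvested from the scam emails and uses stub functions (dial, hang_up) standing in for whatever VoIP softphone the installation actually relies on; all names and numbers are hypothetical.

import random
import sqlite3

def pick_random_number(conn):
    """Return one phone number chosen at random from the scammers table."""
    rows = conn.execute("SELECT phone FROM scammers").fetchall()
    return random.choice(rows)[0]

def dial(number):
    """Stub standing in for the anonymous VoIP softphone: start a call."""
    print(f"Dialling {number} via VoIP ...")

def hang_up():
    """Stub: terminate the current VoIP call."""
    print("Call disconnected.")

if __name__ == "__main__":
    # Demo database with placeholder numbers; the installation would instead
    # use numbers extracted from the collected scam emails.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE scammers (phone TEXT)")
    conn.executemany("INSERT INTO scammers VALUES (?)",
                     [("+000 1111111",), ("+000 2222222",)])
    dial(pick_random_number(conn))  # first button: connect to a random number
    hang_up()                       # second button: disconnect the call

In the installation the two callbacks would be wired to the physical buttons on the fifth can; here they are simply invoked once for demonstration.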
Fig. 1 Setup of the artwork
5 Conclusion
The ‘scammed.by’ database was found to provide valuable datasets that can be further analyzed for our purposes. It offers possibilities to categorize scam messages by scam type, country or
phone carrier, which offers interesting perspectives for further
investigations. When calling the phone numbers, we recognized
that not all phone numbers seemed to be in use and some phone
numbers appeared in several e-mails, even if the narratives or the
characters’ names were slightly altered. Through this experiment we found that the phone conversations were much more personal than the emails: some scammers were very open to
explaining their shady businesses, others preferred to use email
and keep the phone conversation as brief as possible. Some of
the scammers used voice-morphing software to anonymize their
natural voices resulting in a disturbing effect. The conversations
with the scammers were recorded, and some of the stories were
edited and can be listened to through the SPAM-cans in the art
installation.
References
!Mediengruppe Bitnik, Opera calling, www.opera-calling.com
Area 419 – Scambaiting radio, www.blogtalkradio.com/area419
Art by telephone, www2.mcachicago.org/event/
chf-art-by-telephone-and-other-adventures-in-conceptualism/
Afshar, Brunnthaler, Schulze, ‘The evidence of things not said’, www.
derbeweis.at/
Colacello, Bob, Holy Terror: Andy Warhol Close Up. Random House LLC,
2014.
Costin, Andrei, et al. ‘The role of phone numbers in understanding
cyber-crime schemes.‘ Privacy, Security and Trust (PST), 2013 Eleventh
Annual International Conference on. IEEE, 2013.
Kac, Eduardo, ‘Aspects of the Aesthetics of Telecommunications.‘
International Conference on Computer Graphics and Interactive
Techniques: ACM SIGGRAPH 92 Visual Proceedings. Vol. 1992. 1992.
Rustad, Michael L., ‘Private enforcement of cybercrime on the electronic
frontier.‘ S. Cal. Interdisc. LJ 11 (2001): 63.
Stajano, Frank, and Paul Wilson, ‘Understanding scam victims: seven
principles for systems security.‘ Communications of the ACM 54.3 (2011):
70-75.
Maggi, Federico, ‘Are the con artists back? A preliminary analysis of
modern phone frauds.‘ Computer and Information Technology (CIT), 2010
IEEE 10th International Conference on. IEEE, 2010.
Mitnick, Kevin D, The Art of Deception. Wiley, 2002.
Lapsley, Phil. Exploding the phone: The untold story of the teenagers and
outlaws who hacked Ma Bell. Grove Press, 2013.
Smith, Terri, http://terricsmithitsforyouartandtelephone.blogspot.co.at/
Templeton, Brad, Origin of the term ‘spam’ to mean net abuse, www.
templetons.com/brad/spamterm.html
Young, Carey, ‘The representative’, www.careyyoung.com/past/
therepresentative.html
Digital Sensing:
The Multisensory Qualities
of Japanese Interactive Art
Emilia Sosnowska
University of the West of Scotland
Keywords: Interactivity, Digital media, Device Art, Beyond
visual, Multisensory experience.
The paper presents examples of digital art in Japan and examines
its roots in traditional East Asian philosophy, which gives the senses a prominent role in perceiving the world and enables a perfect symbiosis between humans and machines. The research reflects on the expansion of this culturally and traditionally inspired spirituality from its original context in the socio-cultural interpretation of the natural world to contemporary digitally mediated environments. This is accomplished through analysis of digital interactive work by specific artists located in Japan, such as
Kumiko Kushiyama, Masaki Fujihata and Ryota Kuwakubo.
Introduction
Technological development has a chief influence on the way the world functions and how it is perceived. Consequently, it has an impact on the way art is produced and plays an invaluable role in the future of art and contemporary cultural debate. The aim of this paper is to demonstrate the role of evolving new technologies in extending human senses well beyond their traditional definition. It focuses on the nature of multisensory processes in interactive multimodal art and demonstrates an urgent requirement to develop and deploy innovative hybrid methodologies which reflect and come to terms with the innovation, hybridity and complexity of the artworks in question. Walter Benjamin
notes that human sense perception
“changes with humanity’s entire mode of existence. The manner in
which human sense perception is organised, the medium in which it is
accomplished, is determined not only by nature but by historical circumstances as well” (Benjamin 1936).
Today, the media are concerned with engaging the array of human senses to the extent that they are largely based on the very concept of sensory language. While technology is by no means the only factor influencing patterns of perception, its specific application leads to the reorganisation of our sensory perception and draws one’s attention to the sensual reception of digital interactive art. As a result of various techniques used by artists, works of digital art communicate via different senses and represent a range of different embodied experiences. Consequently, it is urgent to abandon visual determinism in the era of multimodal art and to take into account not only the sense of vision, but also all the other senses.
Sensory research
Despite the fact that in the early 1990s the focus of cultural studies shifted towards an anthropology of the senses, this area of social debate has still not taken into account the technological and scientific circumstances of artistic creativity. This reaction
to the much vaunted “visual turn” aimed to transfer the foci of
social sciences to more sensually versatile spheres. Among others, Classen and Howes assert that every society has its own sensory codes and patterns and its individual sensory order is manifested in material culture as artefacts, which accordingly should
be examined with an appropriate attention to hybrid forms of
art, culture, and media. Referring to McLuhan, “tactility is the
interplay of the senses, rather than isolated contact of skin and
object” (1967[64], 335), which is why it should be investigated and
treated like one among other human senses. As long as contemporary culture rejects or underestimates the multisensory nature
of human perception, the challenge to develop an adequate theoretical framework which can come to terms with technologically
focused media art and culture is almost insurmountable.
The subject of multisensory experiences has been considered
in an approach akin to cultural anthropology. Constance Classen
scrutinises sensory perception as a cultural, as well as a physical
act, noticing that tactile sensations, similar to any other occurrences of our sensory nature convey cultural values on top of
purely physical attributes. In fact, the world is perceived through
the senses, and sensory perception is mediated by the cultural
construction (1997). Furthermore, Karen Cham (2009) states that
aesthetic value is culturally coded. Accordingly, since perception
is traditionally connected with human evolution, the fact that
present-day culture is centred on sensing and perceiving reality
through a range of equally important senses cannot, and indeed
should not be ignored. This, in consequence, can induce alternative methods of dealing with sensory perception of digital art
within the visual or, alternatively, multisensory/digital culture
theory. In interaction, the physical engagement and a certain
sensation become involved in action. In the era of digital culture
we now inhabit, the hierarchy of the senses shifts and auditory or
tactile experiences are becoming equally important as sight in the
process of communication, connectivity or perception. As Marshall McLuhan stated nearly 50 years ago: “we had extended our
bodies in space” (1964, 3). Taking this into account one can clearly
see the effects of this expanded force of perception in today’s
world, not only in novel military applications, but also in everyday
devices such as vibrating mobile phones, e-books, among other
tangible tools; or distinct examples from the world of art, such as
Tenori-On by Toshio Iwai, or Bitman by Ryota Kuwakubo. Nonetheless, clear boundaries between individual senses are blurring
into one sensory and versatile experience.
Human perception embodies a fast and constantly active processing of multiple present sensory modalities. For many years
psychologists have undertaken exhaustive research in this area
(Gregory 1970, 1974; Gibson 1966, 1972); however, our knowledge of the significance of new scientific means for the sensual perception of art still needs to be vastly expanded. In this context Laura Marks considers sensual experience and multi-sensory perception in a compilation of essays, focusing on the notion of ‘haptic visuality’ in films and video works. Other works reflecting on sensual
research oscillate in the historical and anthropological scope of
art (Classen 1993, 1994, 2005). Further works elaborating on sensoriality in relation to various human senses are being conducted in other areas of applied research, such as film studies and visual anthropology (Grimshaw, Ravetz, 2005). The matter of sensual perception has also been an issue for the sociology of the senses, as initially proposed by Georg Simmel (1924; 1997). In
his essay Sociology of the senses, Simmel not only argues about
the meaning of sensory perception and its influence on social life and human coexistence, but also about its enormous influence on human interaction. Intellectuals, such as the psychologist Gibson, who worked on the ecological theory of perception
(Gibson, 1966, 1979), tried to question the body and mind dualism by treating the body as not only a source of experience but
of knowledge as well (See Lera Boroditsky and Michael Ramscar
research, 2002). Therefore the paradigm of this embodiment designates an integration of body and mind. Consequently the notion
of embodiment, to some extent, changed the approach to body
and mind duality. Many contemporary theoreticians try to challenge the domination of the sense of vision; as David Howes calls
it “the hegemony of vision in WESTERN CULTURE” (2003, Foretaste XII). Furthermore, he continues that “this dominance is primarily due to the association of sight with both scientific rationalism and capitalist display and to the expansion of the visual field by means of technologies of observation and reproduction” (Ibid.). While the text focuses on analysing the multisensory experience, it also contributes to the debate on sensual anthropology by investigating the means of perception in the field of interactive installation art. Although for many centuries the sense of sight has continually been favoured, the importance and significance
of other senses is surely undoubted. Furthermore, apart from the
fact that human perception is based on the multisensory experience, where all the senses play an equally important role, sensory
perception remains a vital element of all aesthetic experiences
(DeWitt H. Parker 2004, Hekkert 2006). Accordingly, most of the
human knowledge has its beginning in the sensual realm. In other
words, one could say that every experience starts from the senses,
as the sensory organs serve as receptors through which human
beings are able to know and feel the external world.
Interactive Art in Japan
As the ‘Device Art’ movement originated in the land of the rising sun, this paper focuses exclusively on Japanese artists. The presence of Japanese artists in the new media scene has
been evident since the growth of the digital media environment.
Major international festivals in Europe, the most established being
Ars Electronica in Linz, Austria, or SIGGRAPH, organised across the United States and Asia, are the main events bringing together
digital media artists and theorists from the Western, as well as
the Eastern corners of the world. The potential of art created in an
Asian context is acknowledged in works by various new media art
scholars, such as Lev Manovich (2003), Marshall McLuhan (1998),
or Ryszard Kluszczynski (2010). While Manovich observes that
Japan has a strong voice in digital media debate, McLuhan refers
to the Japanese approach to technology inspired by Zen Buddhism,
and Kluszczynski elaborates on an instrument strategy, relating it to ‘Device Art’. Accordingly, the names of artists, such as
Toshio Iwai or Masaki Fujihata are internationally renowned and
recognized also outside Japan. Paradoxically, media art discourse reflects almost exclusively the Western perspective. There has been relatively little theoretical debate, with literature in the field limited to a handful of names and writings. As such, the most prominent voices in the discourse belong to Machiko Kusahara (2001), a Japanese artist and new media art curator, the philosopher Hiroshi Yoshioka (1997), Tomoe Moriyama (2006), and Mauro Arrighi (2011). In short, Kusahara introduced the notion of ‘Device
Art’ to the international art world, Yoshioka comments on the
concept of media art in the West and Japan, Moriyama shares her
perspective on the Japanese media art scene, and Arrighi contributes
to a debate on Japanese media art and animism.
This study sets out to examine the interface between digital
creativity and human sensory features enacted in interactive artworks in this culturally specific context. The following section provides a brief overview of relevant Japanese ritual and spiritual belief, making reference to Japanese thinkers such as Machiko Kusahara and Tomoe Moriyama among others, in order to demonstrate the significant impact of such perspectives on a potential
aesthetic reading of digital media art from and in East Asia and
Japan in particular.
Tradition
Any outline of the background for artistic practices adopting technology and interactive interfaces, and of their historical context, must acknowledge the unique character of Japanese philosophical thought. Japanese culture, with its foundation in
a broader East Asian philosophical tradition, giving priority to
a monism of body and mind connected with Zen Buddhism and
Shinto ideology, fully recognises the importance of the senses
in engaging with and perceiving the world. Eastern philosophy
implies that between the subject and the object, as well as mind
and body, exists a relationship which induces harmony, and the
human being is treated as a complete organism, unified in mind
and body. By means of interactive technology certain aspects of
art have gone through a transformation. To name but a few shifts:
new tools have been developed, creative collaborations have
brought together art and technology. Artists and engineers in
Japan often team up with major electronics companies like Canon
and use their funding to implement their prototypes. Despite this modern industrial and commercial setting, the influence of Shinto on current Japanese art is clear and is confirmed by artists
and intellectuals who declare their inspiration deriving from the
body of Japanese traditions comprising Shinto belief. Academics
such as Moriyama, Kusahara, or Yoshioka stress this notion in
their writings concerning contemporary research in art (Moriyama 2006, Kusahara 2001, Yoshioka 1997). Shinto belief has its
foundation in the ancient heritage. This tradition of Japan asserts
the “existence of spiritual life in objects or natural phenomena
called mi (the god) and tama (the spirit)” (Kitano 2006, 1). Many
theoreticians of contemporary art in Japan refer to Shintoism as a
major influence on Japanese sensibility (Arrighi 2011, Kitano 2006,
and more tentatively Kusahara 2013). This applies to the natural
world and inanimate objects and devices which are after all, at
least at a molecular level, made from natural elements and substances, metals, and even plastics which come from hydrocarbons
formed from organic materials. This holistic view of life also prevails in the perception of artefacts. Consequently, the majority of
Japanese religions eschew notions of dualism and embrace elements of animism (a view that sees spirit in every component of
the world, not only human beings). Referring to Kenny KN Chow
(2012), I suggest that this culturally distinct and traditionally
inspired spirituality is transposed from natural or social environments to a present day technological environment of sophisticated multimedia devices and tools used widely in interactive art
and digitally mediated environments. Moreover, in such multimodal environments, interaction enables and promotes multisensory experience and intersects with the fluidity of aesthetic experience. All of the examples discussed refer to physical engagement
with an artwork and multisensory experience is often an integral
part of artistic creations.
Japanese electronic art and Device Art
movement
Technological development has always been a great inspiration
for human beings, a factor of progress, and a possibility for new
experiences. As McLuhan states: “(o)ur new electric technology that extends our senses and nerves in a global embrace has
large implications for the future of language” (McLuhan 1967,
80). In Japan, extensive research on novel technologies and their
creative use has among other things resulted in the notion of
‘Device Art’. Hiroo Iwata, engineer and artist initiated Device
Art Project in 2004. The idea for the title of the project ‘Device
Art’ derived from Iwata’s research activities and blending media
art and interactive technologies. The name itself was inspired
by Ryota Kuwakubo – discussed below – who once called himself a device artist. ‘Device Art’ has evolved into a new artistic
movement at the leading edge of cultural and creative thought
in Japan and elsewhere. Its principal characteristics indicate the
use of mechatronic devices, new materials and the convergence
of innovative technologies, and new ideas in art and design. Artists assemble everyday components together with the most recent technologies, creating artworks in the form of devices
(Kusahara 2008). The concept of ‘Device Art’ is conceived as a
modern take on traditional Japanese culture. In line with the critical perspective offered by Kusahara (2008) and Arrighi (2011), the
above concept reconsiders relationships between science, technology and art, taking into account historical as well as contemporary perspectives. As such, this form of media art combines art
and technology with popular culture, design and playfulness. An open-minded, ludic approach to creativity allows these artists to engage with developments in new technology and the possibilities given by new materials; the acceptance of a very fluid border, or rather the lack of one, blends amusement, art, technology, design and popular culture. As we will see in the analysis of work
by Ryota Kuwakubo this approach is well developed in Japanese
art. Furthermore, Mauro Arrighi has taken a similar stance. The
artist and researcher argues that the religious basis present in Japanese culture is an essential element which underpins the development and popularisation of new media art in its specific
forms of hybrid art and Device Art in Japan (2011). Theoreticians,
such as Moriyama (2006) and Kusahara (2001) among others,
repeatedly refer to the historical foundations on which their work
is built, including ancient religious beliefs, folk culture and linguistic structure, as a significant determinant of the characteristic approach these artists have developed.
While some of the artists use new technologies as tools assisting in their project design, for others the new technology itself is a new medium carrying aesthetic values and enriching participants’ experience. Nevertheless, what all of them share is that
they represent a current of digital art which routinely deploys
multimodal technological devices. All of the works presented
here serve as practical examples of the potential for embodied
experience and multisensory engagement with an artwork. They
illustrate particular aspects of interactive artwork, such as individual artistic strategies used in their production, the processes
involved, modes of participatory engagement, and the potential
avenues and opportunities they present for the bodily experience.
Sensory engagement in interaction
Each of the works discussed further in this paper demonstrates a variety of applications and critical standpoints. Yet, they all constitute an extensive array of artistic representations which in
their aesthetic perspective take into account human senses in art
perception. The artists presented in this paper range from those who started experimenting with technological devices in the early 80s, like Masaki Fujihata, to those who are specifically focused on a particular sensory feature, the sense of touch, like
Kumiko Kushiyama, to Ryota Kuwakubo, for whom multimodality of an object determines spectators’ or users’ multi-sensory
experience, and who exemplifies the ‘Device Art’ movement. All
of these artists challenge the classic artist-artwork-audience relationship and are part of the notion of a paradigm shift from the
art object as something to be observed or hung on a wall to something to engage with directly through multi-sensory experience
and aspects of embodiment. It is also appropriate to characterise two different sides of the bodily engagement: the body as a foundation of immersive experiences, and the processes of cognitive interpretation of this interaction and immersion (Simanowski 2010).
Ryota Kuwakubo
Fig. 1 Ryota Kuwakubo, Nicodama,
2009, Courtesy of the artist
is a digital artist and a pioneer of the Device Art movement. At the beginning of his career he worked on electronic toys. The artist’s initial fascination was chiefly related to actuators and
sensors. The earliest works that he created were not considered
art objects, but rather electronic devices and games. Throughout
the years his creative explorations have led to projects which have
a more critical and questioning character, but nevertheless remain
grounded in the basic concepts of ‘Device Art’. The work examined here, Nicodama (2009), is an interface in the form of two half-spheres, resembling hyper-realistic eyeballs. The participant is
invited to lift the ‘eyeballs’ and place them freely in the space. The
only limitation in order for the work to function is that both small
hand held devices need to be arranged in a straight line – just like
ordinary eyes.
The work comprises a small transceiver based on an infrared principle. Thanks to a magnetic mechanism installed at the back of the work, Nicodama can be placed anywhere in the
space. To interact with the artwork the devices need to be held and
actuated accordingly by the participant, giving the user control
over an animated creature, and enabling physical engagement. In
the text accompanying the Device Art exhibition the artist refers to Japan’s historical past, noting that people
“felt [that] each of the objects around them had a spirit, and treated them
with respect and care. Today we share a more objective and scientific
approach in seeing things. While there is no doubt that it is important
to maintain this attitude, the capacity for empathy is equally important”
(Kuwakubo 2009).
The blinking pair of ‘eyes’ reacts to the participant’s engagement
and the work can only operate and indeed exist fully through this
physical interaction. This artwork, along with some other works by Kuwakubo, has been developed and made available as a commercial product, proving to be one of many examples of blurred
boundaries between art and entertainment in Japanese culture.
Production of gadgets and toys by artists is a common practice in
Japan (Kusahara 2006). This is a two-way process: the same digital devices are shown as exhibits within art galleries as well as being mass-produced by large companies and exhibited at art fairs or in commercial spaces. When interviewed about his work
Kuwakubo declared that the multisensory experience of the recipient serves as one of his inspirations, whether the work deploys complex software or a simple manual mechanism (Kuwakubo
2013). Throughout his career Kuwakubo’s inspiration and artistic
practice have evolved and slowly started to concentrate on the
behavioural side of an artwork and the spectator’s daily relationship with electronic devices. A continuing concern with how people react to innovations in technology is what often motivates his
creations. The artist is interested in the whole bodily experience
and multimodality of it; by his reckoning, no aesthetic experience
can be removed from its multisensory aspect. The experiences
being provided by technologically aided artworks lie at the core of
his interest in establishing communications between the people
and machines. This physical engagement, required to complete the artwork, is at the core of the concept.
Kumiko Kushiyama
is another example of an artist who combines an interest in the
human body with technologically aided objects and machines.
Her engagement encapsulates all stages of creative development,
from coming up with ideas to designing and engineering completed, leading edge artworks and devices. Kushiyama uses hybrid
practices and fuses elements of science, engineering and fine art practice. In the early 2000s her works began to oscillate predominantly between the sense of touch and different qualities of haptic
experience. From the year 2003 she started developing and using
tactile displays, focusing directly on a tactile interaction. The
work exemplifying this research is called Thermoesthesia (2006).
In order to provide a whole spectrum of sensory stimuli when
touching the artwork, and to give the recipient a real sense of temperature occurring in the natural world, Kushiyama uses original
thermal sense-displays. This enables her to create installations
which not only give the possibility to interact by touching the
surface of an interface, but also to sense other haptic qualities of
the given piece, such as its temperature. By adding actual thermal
properties to the images representing warm or cool substances
the artist tries to recreate all the sensory features as faithfully as
they occur in everyday life.
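Kushiyama’s thermal sense-displays are not specified further in this text, so the following Python fragment is only an illustrative sketch of the pairing just described, not the artist’s implementation. It assumes an 8-bit ‘warmth’ value derived from the displayed image and a hypothetical set_temperature() driver standing in for the thermal (e.g. Peltier-style) element.

COOL_C = 10.0  # assumed lower bound of the display surface temperature, in Celsius
WARM_C = 40.0  # assumed upper bound

def warmth_to_temperature(warmth):
    """Linearly map an 8-bit warmth value (0 = icy, 255 = warm) to a surface temperature."""
    w = max(0, min(255, warmth)) / 255.0
    return COOL_C + w * (WARM_C - COOL_C)

def set_temperature(celsius):
    """Stub standing in for the driver of the thermal display element."""
    print(f"Target surface temperature: {celsius:.1f} C")

if __name__ == "__main__":
    set_temperature(warmth_to_temperature(30))   # wintry snowflake imagery
    set_temperature(warmth_to_temperature(220))  # warm floral imagery

The linear mapping is only one possible design choice; the point is that the tactile temperature follows the displayed imagery, so cool visuals feel cool and warm visuals feel warm.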
Fig. 2, 3 Kumiko Kushiyama,
Thermoesthesia, 2006, Courtesy
of the artist
Thermoesthesia gives the recipients the opportunity to touch
the work and experience the physical tactile engagement and also
to interact directly with the images being part of the artwork. The
displayed imagery ranges from leafy visualisations with warm-toned floral patterns to cold ice crystals and snowflakes in wintry
whites. The interaction with these simulated physical phenomena allows recipients to experience natural occurrences in a different manner, in an artificially created environment, which
resembles the natural one. As such, if the image represents cool
temperature, the touch sensation feels cool as well. The intention of Kushiyama is to provide the opportunity for rediscovery
of the world as we know it in the immediate embodied engagement with the work (Kushiyama 2006). The artwork encourages
the playful exploration of perception processes through haptic
interaction between computer generated images and participants.
The work represents an attempt to engage the recipient in sensory immersion. What influences perception of this artwork is
the fusion of the aural and tactile in the form of gentle sounds,
mechanical touch and the temperature recognition. The artist
ascribes her inspiration to engineering, new technologies as well
as development of robots, humanoids and virtual reality in Japan.
She uses all of these elements as a basis for her own ideas as well as a means of implementing her creative concepts. Paradoxically,
these artworks enabled by sophisticated engineering and technological developments, based on a physical contact with machines,
stimulate immediate bodily contact and awareness of its sensory
modalities. When thinking about prevailing and broadly Western approaches to art and art history, Kushiyama notes the importance of historical context and of the social and theoretical background, which always appears to be an essential part of the artistic
debate. In Japan, on the other hand, interactive art and media art in general seek light and entertaining engagement with engineering and technological developments. It relates to the social
background and the way art is valued and what is regarded as art
around the world.
Masaki Fujihata
Fig. 4 Masaki Fujihata, Beyond Pages,
1995, Courtesy of the artist
1 Interactive art is understood as art involving the participant’s bodily engagement and giving the user a sense of control and a power of creation.
is considered one of the first artists who contributed to the instigation and establishment of interactive digital art within the framework of contemporary art in Japan. His installation Beyond Pages (1995-1997) remains the most recognisable and iconic of his oeuvre and of digital interactive artworks in general, as it serves as a
classic example of interactivity in art.1 Beyond Pages is a digital
interactive installation created to fit into a small darkened room,
in which the real interweaves with the virtual. A desk, a chair and
a lamp are the actual objects in the space and a book (the actual
haptic tablet) lying on the desk, is an interface between a human
and a computer.
The illustrated, virtual book, just as any other, contains words
and visual images. Pictures of leaves, an apple, a stone, a door,
a lamp switch, an hourglass and a simple text, can be browsed
through and animated with the use of a special pen - a wireless
electronic device. All of this is presented as an assembly of digital
images in conjunction with acoustic signals. In this work, Fujihata deals with the fusion between the real and the virtual, combining actual objects in the room with an interface and a digital
projection. As a result, he creates a coherent yet hybridised environment where physical objects are blended into an imaginary
world of the artist. Beyond Pages requires human touch as well as
involvement of the other senses. The interactive and multimodal
qualities of the work enable an embodied approach to the work.
Wielding an instrument which employs tactile and empowering
sensation allows the participant to engage with the piece bodily
and initiates auditory and visual sense perception. The experience is further dependent on implemented technology and digital
representation of the sensations and human experience. Participants are making sense of the works and experimenting with the
medium using information processing systems in the form of an
interactive book.
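Fujihata’s actual software is not described here; purely as an illustration of the pen-to-image-and-sound coupling discussed above, the following Python sketch assumes a lookup table from the pictured objects on the virtual page to hypothetical animation and sound assets, triggered when the wireless pen touches one of them.

# Illustrative only: the asset names below are invented for the example.
PAGE_ITEMS = {
    "leaves": ("leaves_scatter.mov", "rustle.wav"),
    "apple": ("apple_roll.mov", "thud.wav"),
    "door": ("door_open.mov", "creak.wav"),
}

def on_pen_touch(item):
    """Play the animation/sound pair associated with the pictured object touched by the pen."""
    if item not in PAGE_ITEMS:
        return  # the pen touched blank paper: nothing is triggered
    animation, sound = PAGE_ITEMS[item]
    print(f"Projecting {animation} and playing {sound}")

if __name__ == "__main__":
    on_pen_touch("apple")
    on_pen_touch("margin")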
Conclusion
As presented in the above examples, interactive media can enable
humans to externalise the whole central nervous system and
engage physically with an artwork. The human physical body is
treated as a whole and as such takes part in the aesthetic experience. Through interaction and embodied perception participants
are able to observe and examine the space around them and perceive it in the most natural way with the aid of human multi-sensory properties, as well as by interaction between man and the
machine. Moreover, every time that a human being takes part in
the exploration of shared phenomena through these works created
and operated by sophisticated integrated technologies, they do so
from the subjective perspective of the participant. As such, the work
is unique and in some sense is created anew. Each medium has
some assigned qualities to it and each of the media approached
differently has a particular effect on the human perception.
Objects created by Fujihata exist as communication tools and are
determined by the individual human sensorium and approach to
interactivity. Like all the works analysed here, Fujihata’s artworks
do not convey aesthetic meaning unless they are being activated,
explored and perceived sensorially by participants. The role of the
spectator is to participate physically in the art piece and explore
its interactive potential - something essentially dependent on
implemented technology and digitally recreating human sensations. Participants are making sense of the works, and experiment
with the medium. They use information processing systems in
the form of interactive installations and objects. In conclusion, an
effective critical understanding of particular artistic approaches
to multisensory perception should be taken into account when
investigating sensory relations in perception of multimodal art.
The implications of new technologies or notions of engagement
should encourage exploration of culturally rooted creative practices and acknowledge the sensory features of the human body in the reception of artefacts. This paper only points out more effective ways of
thinking about global change in communication, perception and
awareness of expanded sensuality and it should be considered as
a starting point to further investigation.
References
Arrighi, Mauro. Japanese Spell in Electronic Art, CreateSpace
Independent Publishing Platform USA, 2011.
Berque, Augustin. Some traits of Japanese Fudosei. The Japan
Foundation Newsletter XIV (5):1-7, 1987.
Benjamin, Walter. The Work of Art in the Age of Mechanical
Reproduction, New York: Schocken Books, (1936) 1969.
Boroditsky, Lera & Ramscar, Michael. The roles of body and mind in
abstract thought. Psychological Science 13 (2): 185–188. 2002.
Calza, Gian.C. in: Arrighi, Mauro, Japanese spell in Electronic Art. Kindle
Edition. 2011, Accessed June 2011. 2007.
Cham, Karen. Digital Visual Culture: Theory and Practice, ed. A.
Bentkowska-Kafel, Intellect books Bristol, 2009.
Chang, A. & O’Sullivan, C. An audio-haptic aesthetic framework influenced by visual theory, In Framework, 2008.
Chow, Kenny. KN. Toward Holistic Animacy: Digital Animated
Phenomena echoing East Asian Thoughts, Animation 2012: 7, 2012.
Classen, Constance. Foundations of the anthropology of the senses,
International Social Science Journal Volume 49, Issue 153, pages
401–412, September 1997
Daniels, Dieter. Strategies of interactivity, in: The Art and Science of
Interface and Interaction Design, Sommerer, CH et al. Springer-Verlag
Berlin Heidelberg, 2008.
Dewitt H. Parker. The Principles Of Aesthetics. Kessinger Publishing,
2004
Howes, David. Sensual Relations. Engaging the Senses in Culture &
Social Theory. The University of Michigan Press, 2003.
Huhtamo, Erkki. Twin-Touch-Test-Redux: Media Archaeological
Approach to Art, Interactivity and Tactility, in: Media Art Histories.
Edited by Oliver Grau, Cambridge, Mass: The MIT Press, 2005.
Ishii, Hiroshi. http://tangible.media.mit.edu/ Accessed February 2014
Eco, Umberto. The Open Work, trans. Anna Cancogni, Cambridge, MA:
Harvard University Press, (1962) 1989.
Eisenstadt, Shmuel, N. In: Asian Perceptions of Nature: A Critical
Approach, eds. Ole Bruun and Arne Kalland, 48–62. Nordic Institute
of Asian Studies, Studies in Asian Topics, No. 18. Surrey: Curzon Press,
1995
Fujihata, Masaki. Interview with Emilia Sosnowska, 2013.
Gibson, James. J. The Senses Considered as Perceptual Systems, Houghton Mifflin Company, Boston, 1966.
Gibson, James. J. A Theory of Direct Visual Perception. In J. Royce, W.
Rozenboom (Eds.). The Psychology of Knowing. New York: Gordon &
Breach,1972.
Gregory, Richard. The Intelligent Eye. London: Weidenfeld and Nicolson.
(1970).
Gregory, Richard. Concepts and Mechanisms of Perception. London:
Duckworth, 1974.
Grimshaw, Anna and Ravetz, Amanda (eds.). Visualizing anthropology.
Bristol and Portland, OR: Intellect, 2005.
Hekkert, Paul. Design aesthetics: principles of pleasure in design,
Psychology Science, Volume 48, 2006 (2), p. 157 – 172.
Kitano, Naho. Animism, Rinri, Modernization; the Base of Japanese
Robotics. In: ICRA, 07 IEEE, International Conference on Robotics
and Automation, Rome, Italy, April 10 –14. www.roboethics.org, 2006,
Accessed April 2014.
Kluszczynski, Ryszard. Strategies of interactive art, in Journal of
Aesthetics & Culture, Vol. 2, 2010 DOI: 10.3402/jac.v2i0.5525, 2010.
Kusahara, Machiko. Being Japanese/Being Universal- Japanese
Contemporary Media Artists and the Presence of Cultural Heritage,
Kobe University (Originally published in Art Asia Pacific, 2000. This is a new version, 2001, to be published in Poland in 2002) Accessed 20 November 2012, http://www.f.waseda.jp/kusahara/beingjapanese.html,
2001.
Kusahara, Machiko. Intelligent agent Vol. 6 No. 2, Special Issue: Papers
presented at the ISEA2006 Symposium, Available online and Print-on-Demand at http://www.intelligentagent.com, 2006. Accessed February 2014.
Kusahara, Machiko. Digital by Design Ed. Troika Thames and Hudson,
Device Art? Media Art Meets Mass Production, http://deviceart.vrlab.
esys.tsukuba.ac.jp/Kusahara-digitaldesign.php#fragment-12h 2008,
Accessed 02 February 2014
Kusahara, Machiko. Device Art: A New Form of Media Art from a
Japanese Perspective, 2002.
Intelligent Agent, Accessed on 15 December 2013.
Kusahara, Machiko. Interview with Emilia Sosnowska, 2013.
Kushiyama, Kumiko et al. Thermoesthesia: About collaboration of an
artist and a scientist. SIGGRAPH’06 Proceedings, New York, 2006.
Kushiyama, Kumiko. Interview with Emilia Sosnowska, 2013.
Kuwakubo, Ryota. Catalogue text: Device_art 3.009 http://www.
kontejner.org/video-bulb--nicodama-english 2009, Accessed on 12
November 2013
Kuwakubo, Ryota. Interview with Emilia Sosnowska, 2013
Laurel, Brenda. Computers as theatre, Addison-Wesley Longman
Publishing Co., Inc. Boston, MA, USA, 1993.
Manovich, Lev. New Media from Borges to HTML - Introduction to The
New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort,
The MIT Press, 2003.
Marks, Laura. Touch: Sensuous Theory and Multisensory Media,
University of Minnesota Press, 2002.
Masao, Yamaguchi. Karakuri: The ludic relationship between man and
machine in Tokugawa Japan, in: Japan at play: the ludic and the logic of
power / edited by Joy Hendry and Massimo Raveri, Routledge, 2002.
McLuhan, Marshall. The Global Village: Transformations in World Life
and Media in the 21st Century, Oxford University Press, 1998.
McLuhan, Marshall. Understanding Media, London sphere books,
1967(64).
Moriyama, Tomoe. Curating Digital Media-Next Generation of Japanese
Media Art & Exhibition. IV 2006: 664-670, Accessed on 12 December
2012, 2006.
Robles-De-La-Torre G. Principles of Haptic Perception in Virtual
Environments. In: M. Grunwald (Ed.), Human Haptic Perception: Basics
and Applications, Basel: Birkhäuser Verlag, 2008.
Simanowski, Roberto. Event and Meaning: Reading Interactive
Installations in the Light of Art History. In: Beyond the Screen:
Transformations of Literary Structures, Interfaces and Genres, edited by
Jörgen Schäfer, Peter Gendolla, transcript Verlag, Bielefeld, 2010.
Simmel, Georg. Sociology of the Senses, in: D. Frisby & M. Featherstone
(eds.) Simmel on Culture. London: Sage, 109-120, 1997.
Wargo, Robert. J.J. Japanese ethics: Beyond good and evil, Philosophy
East and West 40 (4):499-509, 1990.
Yoshioka, Hiroshi. The Present Tense of Thought: Complex Systems,
Cyberspace, and Affordance Theory, Published in Japanese, 1997.
Intermedia, an Updated
Vision in the Early
Twenty-First Century
André Rangel
[email protected]
Keywords: Art, Design, Presentative Art, Autopoiesis,
Indisciplinarity.
What is it that characterizes the practice of intermedia thought?
Is intermedia thought an actual phenomenon, or is it already
out-of-fashion? Faced with the apparent lack of peer consensus as to the actuality, terminology, and meaning of the term ‘intermedia’, this article presents the main characteristics and antecedents of intermedia thought and practice previously identified by other authors and, based on an updated study, expands the set of these attributes, proposing the inclusion of the presentative and indisciplinary characters. An amplified review of the literature allows us to support a philosophical approach that demonstrates the actuality and appropriateness of Coleridge’s original conception of the ‘intermedium’, as well as the affinity between
intermedia and experimental chemistry, given its transformative,
laboratorial, and experimental character, and its capacity to create new media through the fusion of existing media, in an open
autopoietical process.
1 Introduction
The aim of this study is to substantiate and assert the influence and emergence of an intermedia practice and thought at the beginning of the twenty-first century. The present dispute about the extemporaneousness1 or actuality of intermedia practice and thought, and about the meaning2 of the term itself, as a common denominator of the results of several happenings, practices and thoughts in the art and design field, was taken to be a problematic worthy of this
paper. This paper summarizes theorization about antecedents and
concepts that support an approach to a definition of intermedia.
By relating Higgins, Coleridge and Chemistry, we approach the
original conception of the term ‘intermedium’ as it was proposed
by Coleridge, and also its adaptation by Higgins. We affirm an affinity relationship between intermedia and experimental chemistry, emphasizing its transformative, laboratorial character, and its synthetic ability as a generator of new media. We identify serendipity, indetermination, and autopoiesis as inherent characteristics of intermedia and some scientific practices.
We clarify the distinction between intermedia and multimedia
concepts, arguing from the relational and plural characters that
are respectively associated with each of them. We discuss changes in
artistic movements that were inherited by intermedia, using several examples: in Futurism, the indecency concerning the use of any means or of the public’s reactions; in Dadaism, its heterogeneous strategies and its spontaneity; in the Ready-Made, the liberation of media from their original functions; in Happenings and some of John Cage’s work, the inclusion of the audience and indetermination. We identify the concepts of threshold, hybridism, holism,
continuity or fusion as intermedia attributes that justify its
non-categorization.
It is the aim of this text to demonstrate intermedia conception
as a process of indisciplinary creation, a process that generates
new hybrid media through the systematization and development
of fusion models from different expression media.
2 About Intermedia
1 Authors such as Claudia Gianetti
(2010) consider the intermedia as
’linked with the productions and
theories of the seventies and from
this point of view is a bit outdated.’
2 There is a lack of consensus when several individuals answer the question ‘what does intermedia mean’.
Answers available at: http://
hyperinstrument.com/interviews/
2.1 Higgins, Coleridge and Chemistry
According to Kostelanetz (1999), Higgins made quite clear the
view that there can be no limits to creative activities. Higgins
(1966b) used the term intermedia to categorize art works that
seemed to him to be found between media. Schneider (2000) considers that Higgins, by describing – with the term intermedia – art
works that operate in the interfaces of established media and
3 In an interview with Zurbrügg,
Higgins recalls what he wrote in
his ‘The Something Else Manifesto’:
‘Whatever the other people are doing,
I’ll do something else.’
in the interstices between art and life, anticipated post-modern
preference by hybridism instead of formal unity, as well as the
challenge of art as a pure ontological category.
The term intermedia was used for the first time in 1966, in ‘the something else newsletter’, in Higgins’s attempt to distinguish
Marcel Duchamp’s from Picasso’s work, as he tried to show that
Duchamp’s work was truly ‘between media, between sculpture
and something else’. We can speculate on the name itself of the
newsletter – ‘the something else’3 – but it is more interesting to
stress the recurrent use of the word ‘between’ in Higgins’s text.
The prefix ‘inter’ is sustained, throughout Higgins’s text, by the
word ‘between’. We will then consider the term ‘between’ as a
keyword: to work ‘between’ several disciplines, between several
kinds of knowledge and between several experiences.
The word ‘between’ helps us to understand and contextualize the works resulting from intermedia practice and thought.
Those works fit somewhere in a hybrid area, between Visual Art, Sound Art, Architecture, Design, Performance and Science, amongst other fields. The prefix inter, from the Latin, is related to and synonymous with the word ‘between’. So, a first etymologic interpretation of the term would translate intermedia as ‘between-mediums’.
A century before Higgins, Coleridge had already introduced, in
his work ‘Biographia Literaria’, from 1817, the term ‘intermedium’.
In an interview given to Nicholas Zurbrugg, Higgins (Zurbrugg &
Higgins, 1963, p. 24) confirms that he renewed Coleridge’s term.
In this paper, we intend to explore the notion of intermedia as a ‘transformative chemical agent’. Coleridge is quite clear: “(…) an intermedium of affinity, a sort (…) of mordaunt (…)”. The intermedium is a kind of ‘mordaunt’. In spite of the fact that we no longer find the word ‘mordaunt’ in modern dictionaries, Doctor Alice Eldridge informs us: ‘I think it’s an old fashioned technical chemistry term meaning a common base – like a common ancestor.’ (Eldridge, 2013) Thus, we propose the concept of ‘catalyzer’ as an inferential approach to the meaning of ‘mordaunt’. As a catalyzer, Coleridge’s ‘intermedium’ establishes an initial affinity between
intermedia and a chemical agent.
The affinity between intermedia and chemistry is corroborated by Sumich’s (2006) comparison between intermedia and experimental chemistry. Chemistry, as it is defined by the ‘Collins English
dictionary’, is ‘(…) the branch of physical science concerned with
the composition, properties, and reactions of substances; (…)’.
Eisenkraft et al. (2006) state that artists are chemists, for they ‘study and understand the properties of specific materials – media – and find ways to explore those properties’ in order to
express themselves. According to this author, chemistry comes down to change, and artists become chemists through their need to understand the materials they use. The
understanding of the properties and characteristics of materials
has been a fundamental component of artistic production since
antiquity. Levere (2001) entitles his history of chemistry ‘Transforming Matter’, and he summarizes the technical competencies of chemistry applied to the manipulation, separation, combination and modification of different substances. These competencies are much like those practiced by intermedia artists and
designers. Gardinali (2012) explores the relation between chemistry and art and states that since the discovery of fire, artists
have made a creative and exhaustive use of the media which were
available to them. It is consensual that the activity of both artists and chemists transforms.
Moody (2000) refers to a mutual relation between materials and
artists, and he states that both ‘artists and chemists deeply value
personal interaction and experimentation with materials’; or, in
other words, the laboratorial component that allows the growth
of understanding about the media. Glusberg (1980) already considered intermedia as ‘an unusual laboratory, though, where
technical and communications media are the guinea pigs’. The
affinities between chemistry and intermedia are made visible
in the fact that both are essentially processual, laboratorial and
transformative, which enables the unexpected, serendipity and
indetermination.
Serendipity may be one of the major drives for the artists and
chemists in their work processes. Spector (2003) states that it is
chemistry itself that seduces the chemist’s imagination, which
sometimes produces an intrinsic tension between the charm of
the work they are developing and its final result (p. 253). This
drive (intrinsic and extrinsic) is also absolutely true to intermedia
practice and thought.
The identified affinities are also sustained by the arguments of Spector and Spalding’s discussion (2003) about art and chemistry, in which they identify, for instance, recourse to metaphor,
transformation, synthesis, the production of products, symbolic
language, experimental vocabulary, tools and equipment. These
resources are undeniably recurrent both in the fields of intermedia and chemistry. Kultermann (1980) had already identified one
of these resources as characteristic of intermedia when he states:
‘One of the characteristics of intermedia is its synthesizing character’. Roald Hoffmann is another convinced supporter of the deep
connections, affinities and relations between art and chemistry.
One of them is precisely the creation of media (any type of media,
from material to conceptual) previously non-existent. Hoffmann
(1993), who locates chemistry in the field of science, states:
Art and science share a desire for knowing that which is not yet known.
They share so many things: the nature of inquiry, the intellectual process, the formulation of ideas, a concentration on the observable, a
deep examination of the nature of perception and the ways perceptions
change with the observer. Chemistry and art synthetize by melding old
knowledge with new observations to provide us novel concepts of nature
or of the human relationship to nature. (p. 9)
In this way, intermedia generates something new, at the threshold, from the transformation of existing media.
Spector (2003) states:
Much of the identity of chemistry as a discipline is related to the generation of materials that have not existed before and have no natural
equivalent, rather than to understanding what exists in the natural
world – what chemists like to call “novel” molecules, compounds, or
materials. To me, all these issues of natural/synthetic/imitation/novel
also relate to issues of originality, which is another point of connection
between chemistry and art (...). (p. 240)
If, in the previous quote, we replace the word chemistry with intermedia and the word materials with media, the sentence will
still make perfect sense and will reinforce one of the main ideas
of this paper: intermedia as an essentially presentative phenomenon, and not representative or mimetic, as so many of the more
traditional artistic forms.
If intermedia generates new media from the transformation of
existing media, then we can affirm both the actuality and antiquity of intermedia, because the thought and practice of generating media from the transformation of existing media is a constant
throughout human history. Nevertheless, intermedia has not
always been recognized and validated in the artistic and academic
fields, which were conceived as a hierarchic system of knowledge, of disciplinary division that evolved from the millennial fission
between rationalism and idealism.
Another affinity between intermedia and chemistry is related to the question of ‘indiscipline’ and the breaking of disciplinary limits, as a quality of intermedia. Chemistry can also
be indisciplined, for according to Spalding (2003) ‘It is surprising
that chemistry can take us outside the bounds of more traditional
notions of scientific reason.’ (p. 236) Hoffmann (1993) states that
‘the aesthetic principles of science are not that different from
those of art. Beauty, elegance, deep understanding are sought by
chemists just as much as they are by artists’ (pp. 8-9). We may
conclude that intermedia is precisely a transformative, laboratorial, experimental, synthesis process, able to generate new media.
2.2 Serendipity, indetermination and
autopoiesis
When he was co-organizing the book/exhibition ‘Chemistry
Imagined’, Hoffmann experienced indetermination and serendipity, as he explains by stating that his ‘initial conception […]
was typically scientistic, therefore linear. (…) But the nature of
the creative process has ways of subverting such linear plans.
And the work of art (…) carves out its own space.’(1993, pp. 9-10)
Hoffmann confirms the idea that often the artist's production is induced and oriented by the process of production of the work, a process in which all the (in)determinants should be included.
Indetermination deserves, as an intermedia quality, a deep discussion, and it is worth recalling here that, for some scientists and philosophers, indetermination is also a quality of science. Elstob (1986) refers to how Karl Popper demonstrates that 'even within its own conceptual framework the deterministic scientific view exhibits an inherent indeterminism.' (p. 80) Ilya Prigogine – a physicist and a chemist – challenged scientific determinism, affirming indeterminism and chance as integral parts of systems theory. According to Elstob (1986):
'(In Prigogine's treatise), the indeterminism arises from thermodynamic bifurcation points where random events are what determine the future course of a system. A consequence of this view is that structures that now exist in the world may have resulted from purely chance events, thus denying the universal operation of determinism.' (p. 80)
Zatti (2003) discusses the possibility that the nature of the
universe might be accidental. Indetermination as something
inherent to life and nature justifies its use by artists who try to approach and integrate their work in life and nature. Where intermedia is concerned, indetermination is associated with its creative processes, taken as open autopoietic, transdisciplinary and indisciplinary systems.
We use here the expression ‘open autopoietic’ systems as a
meta-abstraction from the concept of ‘autopoiesis’, introduced
by Maturana and Varela (1980). To be more precise, we use it as an abstraction of Luhmann's concept of 'autopoiesis', which is already an abstraction from Maturana and Varela's original concept. To these authors, 'the establishment of any system depends on the presence of the components that constitute it, and on the kinds of interactions in which they may enter.' (1980, p. 95) Kultermann (1980) also characterized intermedia as an open system, giving as an example the relationship between audience
and work. According to Seidl (2004), in the original concept of
autopoiesis, 'the elements of autopoietic systems are not produced
by something exterior to the system’, i.e., ‘all the processes of
autopoietic systems are produced by the system itself’. Seidl
states that ‘autopoietic systems are operatively closed’, because
there are no operations coming from the outside entering the system, nor vice-versa.
Luhmann indirectly applied the concept to sociology, as a radical abstraction from the original biological concept, transforming it into a general, transdisciplinary and open concept of autopoiesis. Luhmann states that ‘the emerging insight is that the
phenomena of interest for evolution are special kinds of systems:
open systems, that is, those that can exchange energy, matter
and information with their environment.’ (1986, p. 148) A decade
later, Guattari (1995), with or without knowledge of Luhmann’s
abstraction, also reformulates and expands Maturana’s original
concept, as when he writes:
Autopoiesis deserves to be rethought in terms of evolutionary, collective entities, which maintain diverse types of relations of alterity, rather
than being implacably closed in on themselves. In such a case institutions and technical machines appear to be allopoietic, but when one considers them in the context of machinic assemblages they constitute with
human beings, they become ipso facto autopoietic. (1995, pp. 39-40)
According to Doruff (2008), Guattari appeals to creativity as a means to expand homeostatic, self-referential and closed systems,
turning them into open, new, imaginative systems. Hall (2010)
considers that ‘applied to aesthetics, autopoiesis replaces an
external objective view of art with an internal relativistic understanding of creation. And this can be described as a self-functioning system of aesthetics that is open to negotiation.’ Spielmann (2005) states that intermedia works are formed through
the exchange and transformation of elements originating from
different media. In this perspective and on these grounds we may
abstract the expression ‘open autopoiesis’, open to exchanges
with the exterior that contribute to the modelization of the structure of intermedia systems of creation and production, as well as
of their resulting works. Maturana’s closed concept evolved to an
open concept of autopoiesis, so that we may say that the dynamics of intermedia processes are open autopoietic dynamics.
The spectator may be one of the components of the intermedia
system, i.e., a medium. Having no intention to deny the Author’s
accountability, we affirm that the hybrid Author/Artist/Designer/Producer has no nuclear role, but is merely a necessary component of the intermedia system – merely another medium. Thus, the
intermedia follow from the interaction of the media that compose
it. If we consider interaction and exchanges between media as a
communication phenomenon, then intermedia is indisputably
interactive and even enactive.
Clark (2011), who studied autopoiesis as a basis for the expansion of interactive artistic systems, recovers the classical model of the artistic system proposed by Cornock and Edmonds (1973), composed of the artist, the participants, the work, the environment in which these elements are placed, and the dynamic processes or interactions that follow from this constitution. By contrast, this paper suggests that intermedia emancipates itself from the system, posing itself as its own system – an open autopoietic one – of conception, production and materialization, and not as an element or component of the system.
In regard to the application of autopoiesis to esthetics, Hall
(2010) considers the interactive art work as an evolutionary system, where the art object and the spectator become co-organizers,
creating an emergent esthetics, one that we might call endoesthetics. Hall’s idea is very close to the concept of enaction when
we take into account the capacity of sharing proposed by Teixeira
(1998), when he considers that the ‘activity of communication
doesn’t consist in the transference of information from the emitter to the receptor, but in the mutual modulation of a common
world through joint action.' (p. 147) The joint action of human beings as intermedia media confers on intermedia its evolutionary character as an enactive interface. Intermedia is then a kind of hybrid of living beings and non-living beings, such as mechanical, electronic or informatic systems. This hybrid character is sustained by Barton
(2008) when he states that ‘intermedia often attempts to enact
the symbiosis of body and machine, locating each within the lived
context of contemporary experience.’
Let us recall the 'autopoietic principle' of the 'hybrid constitution' proposed by Francesco Monico (n.d.):
Every living and non-living being has to be respected in its “self-creation” and in its expression of a fundamental dialectic between structure, mechanism and function. As an organized unity, as a network
of processes of transformation and destruction of components which
through their transformations continuously regenerate and realize the
network of relations that produced them, and constitute it as a concrete
unity in space in which they exist by specifying the topological domain
of its realization as such a network.
This principle indicates the ambiguous character of the intermedia as a consequent unity of its own autopoiesis: the unity of
the unique, of the uniform, of the conform and of the homogeneous, but also the unity of the deformed, of the diverse, of the
distinct and of the heterogeneous. Coleridge, in 1817, defined the 'intermedium' as a catalyzer, Cornock and Edmonds, in 1973, considered the artist as a catalyzer of creative activity, and in 2013 'Ars Electrónica' promoted an exhibition commissioned by Manuela Naveau whose theme was the artist and the work as catalyzers. The untimeliness of the association between the terms art work and/or artist and catalyzer gives intermedia constant actuality.
2.3 Multimedia and Intermedia
The term intermedia, coined by Higgins, predates the term multimedia (Zuras, 2010). In order to clarify the distinction between
intermedia and multimedia, let us hear what Frank (1982) wrote
about this subject:
Intermedia, in effect, denotes the wholly hybrid art forms that result
from a seamless fusing of approaches and attitudes originating in the
traditional arts. The elements in Wagner’s operas – music, libretto, stage
design and costumes, dance (such as there is) – can be functionally
isolated from one another without complete loss of coherence or even
integrity. (1982, p.58).
Gesamtkunstwerk is considered to be a multimedia work. As a
combination of visual and sound arts, Frank places it in the category of those works which merely overlay media. Frank notes that
‘the cross-referencing and combining aural and visual art is part of
a wide realm of cross-artistic and even pan-artistic activity which
has pertained for centuries. Multimedia manifestations comprise
part of this activity.’ (Frank, 1982, p. 58) Ox (1999a) emphasizes
the complete difference between the concepts of intermedia and
multimedia. To this author, whereas in multimedia the content is
simultaneously presented in more than a medium, the intermedia
combines structural and syntactic elements of different media in
a single new medium. Higgins (1998) established a differentiation
between intermedia and multimedia, recognizing that the former
was a kind of conceptual fusion. Spielmann clarifies the distinction between intermedia and other approaches, such as multimedia, hypermedia or mixed media. The author argues that the latter
may be compared between themselves because they describe the
‘expansion of a singular media in terms of accumulation’, whereas
in the intermedia, instead of accumulation, the expansion results
from a process of 'transformation'. In spite of being considered merely multimedia, the Wagnerian Gesamtkunstwerk, according to Wurth (2006), intended, but did not achieve, the total connection of all the media in an amalgam with no origin. And, according
to Wurth, Wagner's approach failed because it affirmed the separation and the hierarchy between the media.
Wurth (2006) writes: For Wagner’s programme of ‘‘together-art’’ feeds,
precisely, on medial limits: in his outlook of the artwork of the future
he starts from a hierarchy of the temporal (‘‘human’’) over the spatial
(‘‘plastic’’) media and, moreover, situates each of these media within
their conventionally assigned domain. Thus, painting and music or
poetry are not so much fused as put together in the sense of combining
while retaining their respective roles. (p. 7)
This point of view sustains that Wagnerian opera, oriented to
the proscenium, is not a transformative amalgam, a confusion
between arts and senses or a contamination between the media,
but only a mere combination of separate parts. Meier (2012, p. 134) describes the Gesamtkunstwerk as 'non total', for the media merely collide with each other, in a multimedia dynamics, instead of interacting with each other, as happens in intermedia dynamics. Meier further distinguishes the Gesamtkunstwerk from intermedia insofar as, in the first, different media cooperate in a complementary way aiming at totality, while in intermedia what happens is precisely the deconstruction of the total art work. Meier remarks: 'It is in this sense that the Gesamtkunstwerk aims at the full representation of human experience—the total work of art that should express all of life's experiences, but does not create a new life experience.' This last argument from Meier allows us to sustain that intermediality intends to afford new life experiences, to expand the scope of experiences lived by the public, and not to make any kind of representation.
Although there is a clear distinction between multimedia and
intermedia, the two can be related. Glusberg (1980) considers
that the multiplicity of media, multimedia, is the infrastructure
of intermedia, conceived as the totalization of the artistic forms.
This author sees intermedia as a revolution of total scale in art and affirms that intermedia, 'in addition to being multimedia, is also transmedia'. Dias (2012), who also considers the opposition between the terms 'multi' and 'inter' in several contexts, takes the 'multi' as the confirmation of diversity and plurality. The 'inter', to Dias, is relational. Dias suggests that multimedia 'come' before intermedia and affirms that the relational character of intermedia plays a constitutive role: '(…) the time of relation between the media is the time of production of the media themselves'. In other words, Dias sustains the idea that intermedia production is the production (synthesis) of new media.
In a cultural context full of frontiers, Fornäs (2002) affirms the advantage of the relational character of the 'inter' over the pluralism and combinatory character of the 'multi':
The general pluralism of the multi- has its very important points, but
the relational inter- opens up wider doors toward new kinds of processual cultural studies, by allowing for a great range of different kinds
of connection, beside the mere addition of elements. This stress of the
inter- is a way to navigate away from the traps of structuralism and systems theory, where dynamic relations tend to become petrified into relatively closed totalities. (Fornäs, 2002, p. 16)
Fornäs describes intermediality not only as relational, but
mainly and precisely as the mixture of the breaking of the
rules and the transgression of frontiers and boundaries. To this
author, the liberation of disciplinary restrictions is one of the
necessary conditions to creative culture. To Fornäs, multimedia
are only combinations of separable media, while the intermedia
concern 'the passages between the media that demand thresholds'. We may conclude that intermedia is also a 'crossing-field', a hybrid field of construction that operates by relating that which was separated and dispersed. In Fornäs's thought, it is important to consider that the operations and mixtures 'of' and 'between' the media demand human agency and contextualization. The
media relate through human contextualized interaction.
Intermediality (…) is when media (…) are connected by specific people (interpretive communities) in specific settings (physical, virtual and
social spaces). (…) People necessarily mediate between media and media
between people. (Fornäs, 2002, pp. 19-20)
2.4 Futurism and Dadaism
Shatnoff (1967) considered that some of the first Dadaist shows in the twenties were in fact intermedia. Gilbert Chase (1967) indicates the work of Cage in the beginning of the fifties as seminal to the development of what would later be designated as intermedia, in at least two aspects: the suppression of musical notation, as an opportunity to open the space-time of the work to the acting of the performer, and the random happenings generated in/by the environment. Kirby (1965) affirms that each one of the dimensions of Cage's work was already prefigured in the works of the Futurists and Dadaists. In 1913, the Futurist Luigi Russolo wrote a letter/manifesto – 'L'arte dei rumori' – that Christensen (2009) considers one of the most influential texts in the musical esthetics of the twentieth century. In it, Russolo (1967), who radically
desired to change the perception of what might be considered music, expresses his claim that the noisy sounds of machines and urban life should be considered as musical tones and timbres. Like the 'arte dei rumori', the 'esthetics of noise', also designated as 'bruitism' and explored by the Dadaists, aimed, according to Niebisch (2013), 'to end this chauvinism against noise'.
In this sense, the Futurists advocated a shameless attitude towards the use of any media in the artistic event. According to Tisdall (1978b), the Futurists turned their backs on the 'closed' life of the 'intellectually cultivated', a gesture that might be seen as an anticipation of the transgressive and challenging attitude of intermedia before institutional schemes and conventional definitions. Another aspect of Futurism that anticipates what would come to be the practice of the Dadaists and integrates our concept of intermedia is the use of the public's spontaneous reaction. Marinetti, according to Tisdall (1978a), by expanding the new form of performance, included a greater degree of audience participation. The rather extreme use of the audience by the Dadaists is even considered by Niebisch to be parasitic.
Foster (2003) considers that another aim of Dadaism was pandemonium, a total mess, the creation of a tumultuous and no-rules
place. Kristiansen (1968) corroborates this idea of pandemonium, arguing that Dadaism had a clear and unmistakable influence on 'happenings', and he quotes Clau Backman's use of the same term: 'orgiastic pandemonium'. Kristiansen considered Dadaism the opposite of an artistic movement, a denial of all the schools, born of a necessity for independence and of a distrust of unity. The more important Dada strategies were the 'invention' of the 'readymade', the use of collage and assemblage, and the implantation of chance. These strategies, in addition to being mechanisms for the materialization of artistic objects, are also a resignation of the more traditional forms of artistic work. To buy, to edit, to fix – these were the new working forms, at that time far from being as familiar and applied as they are today. Walter Benjamin, in his polemical essay 'The Author as a Producer', congratulates the Dadaists, pointing out that the revolutionary force of Dadaism lay in the fact that it defied art's authenticity (Benjamin, 1934/2008). The artists who used chance were yet more challenging to the traditional modes of artistic work; whether it was a found object or an automatic drawing, chance allowed artists to abandon final control over their art works, simultaneously diminishing the quantity and the effect of their labor.
Dadaism, according to Kristiansen (1968), anticipated and influenced the 'happening' through the heterogeneous mixture of distinct forms of expression, especially by way of three of its theories: 'bruitism', simultaneity and spontaneity. The last two theories are
essential in the construction of the intermedia concept, for intermedia works generally exhibit simultaneity of media in shaping the work, and also the spontaneity of the human being, who is free to act without any previous orientation or staging.
We cannot deny the influence that the 'Fluxus' movement had on what Higgins designated as intermedia. Yet, in this paper we strategically neglected that influence – given its temporal proximity to Higgins – and chose to discuss the antecedents that prefigured some of the strategies identified as characteristics of intermedia.
2.5 Readymade and Happening
The idea of pure media, pure formats, is inadequate for understanding intermedia dynamics. Higgins coined the term Intermedia as a way of criticizing the separation, distinction and categorization of the media used in Art. These 'almost mechanical' hierarchies and separations, emphasized during the Renaissance, lasted at least until the twentieth century and were associated with society's division and subdivision into classes. According to Higgins (1966b), the tight division between social classes was absolutely irrelevant, and he also considered unnecessary the observations of art that aimed only to shelve it inside one or another particular category.
To Higgins, the Ready-made and the Happening break the idea
of pure media or formats:
The ready-made or found object, in a sense an intermedium since it was
not intended to conform to the pure medium, usually suggests this, and
therefore suggests a location in the field between the general area of art
media and those of life media. (Higgins, 1966b)
We can see this new intermedia space in Duchamp’s urinal: ‘He
took an ordinary article of life, placed it so that its useful significance disappeared under the new title and point of view – created a new thought for that object.’ (Harrison & Wood, Eds., 2003)
The subversion or transformation of the functionality of a medium,
evident in the ready-made, is also one of the main principles of
intermedia – to liberate the media from their original functions,
opening the possibility to create new thoughts, new ideas and
new functions for already existing media. We may conclude that the functional transformation of objects, materials and languages integrates the process of creation and materialization of
intermedia works.
We might speculate that this ability to transform the functionality of the objects is one of the characteristics that distinguish human beings from the other animals. Schneider (2000)
introduces in the artistic, political and social lexicon the term
‘nomadmedia’, concerning the nomadization of media. In spite of
having been created in the context of political, social and artistic
activism, this term is useful to designate the intermedia liberation and subversion of existing media. In the present context, we may define the nomadization of media as the process of transporting media out of their original contexts and functions to operate in other contexts. Duchamp's practice, in the 'Fountain', points to the concept of 'prosume', which proves inseparable from current intermedia dynamics.
Intermedia practice and thought freely combines the production, the consumption and the re-using of media. We may for
instance underscore the use of hardware and software (as programming environments and languages), for both uses are simultaneously production and consumption acts. We can consider as
readymade either an 'Arduino' or a programming language, ready
to be consumed as they in fact are; but, simultaneously, they also
imply the production of an electronic circuit and a program.
Higgins refers to the inclusion and participation of the spectator in the 'happening' and underscores Kaprow's work as a pioneer of this kind of artistic event, emphasizing his philosophical approach to mediation in the relation between spectator and art work. Higgins (1966b) criticizes 'proscenic theatre', with its mechanical division of actors, production staff, audience, argument and script, for its lack of portability and flexibility.
Thus the Happening developed as an intermedium, an uncharted land
that lies between collage, music, and the theater. It is not governed by
rules; each work determines its own medium and form according to its
needs. The concept itself is better understood by what it is not, rather
than what it is. (Higgins, 1966b)
As it is impossible to give an objective definition of what intermedia is, we should keep in mind that Higgins confronts two possibilities: Intermedia as a huge and inclusive artistic movement, or, by contrast, Intermedia as an inevitable and irreversible historical innovation in reaction to the compartmentalization of history itself.
Higgins introduced the term Intermedia in February 1966, and the following month Allan Kaprow (1966) used it in association with fusion and hybridization, taken as parallel forms of a thought that is closer to life. This was the second written record of the term Intermedia. We should not minimize the considerable immediate impact that the 'readymade', the 'happening' or intermedia had in the artistic context. In the same year, Corrigan (1966) immediately refers to intermedia experience and the happening as
signs of new forms of expression with unpredictable evolution.
The inclusion of the audience, as much in Futurist and Dadaist works as in the happening and in intermedia, as a resource involved in the work's materialization, brings indetermination to the work itself. Almost 50 years later, the questions and drives of the readymade and the happening explored by intermedia are still open and current.
2.6 Intermedia Space-time
We can hardly categorize intermedia works as uniquely sculptural, plastic, musical, or architectonic, because they do not fit exclusively into any of these categories, while at the same time they in some way fit into each one of them. They are the product of interactions between independent space-time systems (Ox, 2001); they occupy a hybrid and ambiguous space-time. Cseres (2009) places intermedia work in that space-time between media, codes, types, genres, forms, tools and institutions. Cseres states that intermedia works defy conventional classifications, institutional schemes, as well as conventional definitions of art and creativity.
Intermedia space-time, between categories, between media,
between concepts, is not the void; on the contrary, it is a space-time filled with possibilities, countless combinations and configurations. Fornäs (2002) alludes to this intermedia space-time as
a transgression space-time, due to the fact that the intermedia
operation occurs precisely at the threshold zone of the media, the
disciplines and the concepts. In order to ground his idea of the
threshold as a zone of space-time and not a boundary, a border
or a division, Fornäs uses Walter Benjamin’s conceptualization of
the threshold:
The threshold must be carefully distinguished from the boundary. A
Schwelle – threshold – is a zone. Transformation, passage, wave action
are in the word schwellen, swell, and etymology ought not to overlook
these senses. (Benjamin, 1999, p. 494)
A threshold is a transition zone, while a border is a line that
separates. Borders inhibit movements, while thresholds invite
innovative change. As a matter of fact, these thresholds seem to
be part of human nature, for, according to Fornäs, human communication and interaction are recognized as sources of threshold experiences.
Baker (2003b) designates the intermedia space-time as
'betweenness', and it seems that he attributes elastic and flexible properties to this betweenness, as when he states that it stretches/expands media definitions, an expansion that occurs either
‘in-between’ or ‘inside’ the media themselves. Meier (2012) states
that this space-time, which he designates as ‘space in-between’,
‘(…) has the potential to create genuine thought as an event within
the concentrated form of intermedial artwork.’ In other words,
intermedia space-time is simultaneously a thought space-time
and a thought-generating space-time. Ascott (2013) designates this space-time as interstitial, and he proposes the concept of 'interstitial creativity' in reference to any type of practice that operates between the borders of media, genre or types of knowledge, with no recognition of any kind of hierarchy between them. We may
conclude that these intermedia dynamics operate precisely in this
interstitial space-time, full of matter and structure.
Intermedia thought and practice also have a common denominator in the holistic attribute. Although there are not many bibliographical references that use the term in association with intermedia, we may indicate Friedman's contribution (2007), where he uses the term as an adjective to qualify intermedia as a holistic or unified program, in order to distinguish it from other concepts
such as multimedia.
The hybrid character of intermedia is another consensual and
transversal concept. Higgins (1967) states that intermedia covers the art forms that are ‘conceptual hybrids’, in-between two or
more traditional media. Frank (1982) states that the more radical aspects of artistic crossings should be considered under the intermedia scope, taking intermedia as the totality of the hybrid art forms. McCombe (2006) presents intermedia exactly as a synonym of hybrid:
These three works can be regarded as hybrid or intermedia works in which
traditional art form boundaries are blurred through the intertwining of
music, text, video and performance. (…) I believe that a hybrid or intermedia arts practice provides a much more fruitful and exciting creative
vehicle, both in terms of the individual composer/artist/creator and in
terms of the development of new work that articulates a variety of relationships between art forms and media. (McCombe, 2006, pp. 299 & 309)
McLuhan (1994) refers to the hybrid meeting between two
media as an occurrence with great artistic, social and physical
transformation potential, arguing that the ‘meeting of two media’
can, amongst other possibilities, create new forms. ‘The hybrid
or the meeting of two media is a moment of truth and revelation from which new form is born.’ (p. 55) Kase (2009) shows that
McLuhan used hybrid projects which functioned as experiences
able to challenge the social conventional patterns of perception
and thought. Friedman (2007), also highlighting the hybrid character of intermedia, paraphrases Higgins’s concept: ‘the term
intermedia referred to art forms that draw on the roots of several
media, growing into new hybrids.’ (p. 14)
It seems consensual that the intermedia generate something
liminal and new. Ox (2001) confirms this approach: 'Intermedia is a combinatory structure of syntactical elements that come from more than one medium but are combined into one and are thereby transformed into a new entity.' (p. 47) Dorfles (1980) refers to osmosis, symbiosis and the confluence of the varied artistic languages as a trend that fosters contamination between languages and counteracts the 'stagnation' of pure languages. Dorfles relates intermedia to the new technological and mechanical discoveries, considering it the creator of a new language and of linguistic specificities constituted by the adoption of several codes. We would like to highlight the idea, in Dorfles's thought, of intermedia as a renaissance of the global creativity of the human being. Ascott (2013) affirms and actualizes this concept by designating it interstitial creativity:
Artists will look anywhere, into any discipline, spiritual or scientific,
immediate or distant in space or time, any technology, ancient or modern, to enable the untrammeled navigation of mind, and the open-ended
exploration of consciousness.
We recognize no meta-language or meta-system that places one discipline or world-view automatically above all others. We look in all directions for inspiration and understanding: to the East as well as the West;
the left hand path as well as the right; working with both reason and
intuition, sense and nonsense, subtlety and sensibility.
Synthesizing all the perspectives discussed above, we may say
that intermedia operates, not only in the interstitial space, not
only between boundaries and borders, but also, and mainly, at the
threshold of media.
3 Conclusion
The mixture and fusion of media, out of which new media emerge,
is perhaps a constant in the history of mankind. Nevertheless, a proper designation for this kind of activity was only established in the field of arts around the 1960s. Even the subversion and transformation of media functionality, validated in the artistic field with the introduction of Duchamp's Ready-made and the changes brought about by the Futurist and Dada movements, is also a constant in the evolution of mankind, for human beings have always felt the will and desire to create new media and transform them. From this untimely point of view, the age of the media
no longer makes any sense, because all of them were young and will become old, all of them were high-tech and will become low-tech. Intermedia is then as old and as modern as the human being, in spite of the fact that the artistic and academic fields have not always sufficiently integrated and validated it.
Intermedia dynamics, besides operating at the thresholds
and in the interstitial space-time of media and disciplines, can
also integrate interdisciplinary, transdisciplinary and multidisciplinary actions. However, the aim of this paper has been the
characterization of intermedia dynamics as mainly indisciplinary, considering that the prefix 'in' can be simultaneously interpreted as a negative value or as a place and movement inside the discipline. Thus, we may characterize the intermedia dynamics of practice and thought as indisciplinary, for, following the requisites of their own actions, they indiscriminately act 'in', inside the disciplines, using their deepest principles and premises, but also 'in', indecently negating those same principles and premises. In this way, we simultaneously challenge and actualize Higgins's proposal, for
whom intermedia operated essentially between media, between
disciplines.
References
Ascott, R. Interstitial creativity: art, mind and technology. Presented at
the COST Arts and Technologies Workshop, Zagreb, Croatia. Retrieved
from http://www.cost.eu/download/41192, 2013.
Baker, G. Reanimations (I). October, 104, 28–70, 2003.
Barton, B. Subjectivity [] Culture [] Communications [] Intermedia: A Meditation on the “impure interactions” of Performance and the “in-between” Space of Intimacy in a Wired World. Theatre Research in Canada/Recherches Théâtrales au Canada, 29(1), 2008.
Benjamin, W. The Arcades Project. (R. Tiedemann, Ed.). Harvard
University Press, 1999.
Benjamin, W. The Author As Producer. In M. W. Jennings, B. Doherty,
& T. Y. Levin (Eds.), The Work of Art in the Age of Its Technological
Reproducibility, and Other Writings on Media. Cambridge, Mass.:
Belknap Press of Harvard University Press, 1934/2008.
Chase, G. Composers Have Their Say. In Anuario (Vol. 3, pp. 101–110),
1967.
Christensen, R. C. The Art of Noise after Futurism. Nordic Network of
Avant-Garde Studies. Retrieved from http://www.avantgardenet.eu/
HAC/studentpapers/christensen_art_of_noise.pdf, 2009.
Clark, S. Revisiting interactive art systems. In Proceedings of the 2011
international conference on Electronic Visualisation and the Arts (pp.
205–205), 2011.
Cornock, S., & Edmonds, E. The Creative Process Where the Artist is Amplified or Superseded by the Computer. Leonardo, 6, 11–16, 1973.
Corrigan, R. W. The First Ten Years. The Tulane Drama Review, 10(4),
17–19, 1966.
Cseres, J. In Between as a Permanent Status: Milan Adamčiak’s Version of
Intermedia. Leonardo Music Journal, 19, 31–34, 2009.
Dias, B. Media, Intermedia, Theory and Practice. Retrieved from http://
hyperinstrument.com/interviews/?p=440, 2012.
Dorfles, G. Intermedia and Mixed Media as Sign of Crisis or a Rebirth of
the Visual Arts. In Theoretical Analysis of the Intermedia Art Form.
Salomon R. Guggenheim Museum, 1980.
Doruff, S. Who Done It? Ethico-aesthetics, the production of subjectivity and
attribution. In FLOSS+art (pp. 118–127), openMute, 2008.
Eisenkraft, A., Heltzel, C., Johnson, D., & Radcliffe, B. Artist as
Chemist. The Science Teacher, 33–37, 2006.
Eldridge, A. Re: Question, e-mail to André Rangel, 2013.
Elstob, C. M. Indeterminism in System Science. In R. Trappl (Ed.),
Cybernetics and Systems ’86 (pp. 79–86). Vienna, Austria: D. Reidel
Publishing Company, 1986.
Fornäs, J. Passages Across Thresholds: Into the Borderlands of Mediation.
Convergence: The International Journal of Research into New Media
Technologies, 8(4), 26, 2002.
Fornäs, J., Klein, K., Ladendorf, M., Sundén, J., & Sveningsson, M.
Into Digital Borderlands. In digital borderlands cultural studies of
identity and interactivity on the internet (p. 196). New York: Peter Lang,
2002.
Foster, H. Dada Mime. October, 105(Summer, 2003), 166–176, 2003.
Frank, P. Soundings at SUNY. Art Journal, 42(1), 58–62, 1982.
Friedman, K. Intermedia, Multimedia, Media. Ken Friedman. Retrieved
from http://www.intermediamfa.org/imd501/media/1232972617.pdf,
2007
Gardinali, P. R. Chemistry in Art - Course Syllabus. Florida International
University, 2012.
Giannetti, C. Media, Intermedia, Theory and Practice. Retrieved from http://hyperinstrument.com/interviews/?p=319, 2010.
Glusberg, J. From Leonardo to Intermedia Revolution. In Theoretical
Analysis of the Intermedia Art Form. Salomon R. Guggenheim Museum,
1980.
Guattari, F. Chaosmosis: An Ethico-Aesthetic Paradigm. Indiana
University Press, 1995.
Hall, J. An Autopoietic Aesthetic for Interactive Robotic Installation.
Jennifer Hall, 2010.
Harrison, C., & Wood, P. (Eds.). Art in theory, 1900-2000 (2nd ed.).
Wiley-Blackwell, 2003.
Higgins, D. Intermedia. The Something Else NEWSLETTER, 1(1), 1966b.
Higgins, D. Horizons. Roof Books, 1998.
Higgins, D., & Zurbrugg, N. Looking Back. PAJ: A Journal of Performance
and Art, 21(2), 19–32, 1999.
Hoffmann, R. Herbert F. Johnson Museum of Art annual report: 1992-1993
(pp. 9–11). The Museum, Cornell University, 1993.
Kaprow, A. Manifestos - A Great Bear Pamphlet. Something Else Press,
1966.
Kase, C. A Cinema of Anxiety: American Experimental Film in the Realm
of Art (1965–75). UNIVERSITY OF SOUTHERN CALIFORNIA, 2009.
Kirby, M. The New Theatre. The Tulane Drama Review, 23–43, 1965.
Kostelanetz, R. Dick Higgins (1938-1998). PAJ: A Journal of Performance
and Art, 21(2), 11–17, 1999.
Kristiansen, D. M. What Is Dada? Educational Theatre Journal, 20(3),
457–462, 1968.
Kultermann, U. Towards a Definition of Intermedia. In Theoretical
Analysis of the Intermedia Art Form. Salomon R. Guggenheim Museum,
1980.
Levere, T. H. Transforming matter : a history of chemistry from alchemy
to the buckyball. The Johns Hopkins University Press, 2001.
Luhmann, N. The autopoiesis of social systems. In F. Geyer & J. V. der
Zouwen (Eds.), Sociocybernetic Paradoxes: Observation, Control and
Evolution of Self-Steering Systems (p. 248). SAGE Publications Ltd, 1986.
Maturana, H. R., & Varela, F. J. Autopoiesis and Cognition: The
Realization of the Living. D. Reidel Publishing Company, 1980.
McCombe, C. Videomusicvideo—composing across media. Contemporary
Music Review, 25(4), 299–310, 2006.
McLuhan, M. Understanding Media: The Extensions of Man (1st MIT
Press ed.). MIT Press, 1994.
Meier, J. Genuine thought is inter(medial). In B. Herzogenrath (Ed.),
Travels in Intermediality: ReBlurring the Boundaries (p. 286).
Dartmouth, 2012.
Monico, F. 9 - Autopoietic Principle. The Hybrid Constitution, n.d.
Moody, A. E. JINS 331: The Chemistry of Art. Truman State University,
2000.
Niebisch, A. Feedback: Media Parasites and the Circuits of
Communication (Dada and Burroughs). Semiotic Review, (1), 1–7, 2013.
Ox, J. Introduction: Color Me Synesthesia. Leonardo, 32(1), 7–8, 1999.
Ox, J. Intersenses/Intermedia: A Theoretical Perspective. Leonardo, 34(1),
47–48, 2001.
Russolo, L. The Art of Noise. Something Else Press, 1967.
Schneider, R. Nomadmedia: On Critical Art Ensemble. The Drama
Review: TDR, 44(4), 120–131, 2000.
Seidl, D. Luhmann’s theory of autopoietic social systems. Munich
Business Research Paper, 2004.
Shatnoff, J. Expo 67: A multiple vision. Film Quarterly, 21(1), 2–13, 1967.
Spector, T. I., & Spalding, D. Between Chemistry and Art. HYLE – International Journal for Philosophy of Chemistry, 9(2), 233–243, 2003.
Spielmann, Y. History and Theory of Intermedia in Visual Culture
(Manuscript of the Paper Presentation). In H. Breder & K.-P. Busse
(Eds.), Intermedia: Enacting the liminal (pp. 131–138). Books On
Demand, 2005.
Teixeira, J. de F. Mentes e máquinas: uma introdução à ciência cognitiva.
Porto Alegre: Artes Médicas, 1998.
Tisdall, C., & Bozzolla, A. Literature and Theatre. In Futurism (Vol. 20).
Oxford University Press, 1978a.
Tisdall, C., & Bozzolla, A. The Means of Futurism. In Futurism (Vol. 20).
Oxford University Press, 1978b.
Wurth, K. B. Multimediality, Intermediality, and Medially Complex
Digital Poetry. RiLUnE, (5), 1–18, 2006.
Zatti, M. Indeterminacy a Necessary Condition for Free Will. Humanitas,
XVI(2), 107–118, 2003.
Zuras, M. Tech Art History, Part 2. switched. Retrieved from http://www.
switched.com/2010/06/03/tech-art-history-part-2/, 2010.
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Visualising
Electromagnetic Fields:
An Approach to Visual Data
Representation and the
Discussion of Invisible
Phenomena
Luke Sturgeon
Royal College of Art, London, UK
[email protected]
Shamik Ray
Copenhagen Institute of Interaction Design (CIID), Denmark
[email protected]
Keywords: Data Visualisation, Invisible Technologies,
Interactive Art, Data Expression, Communication, Photography,
Light Painting, Education, Critical Discussion
This paper presents the process, approach and results of Visualising Electromagnetic Fields, a project that produced a toolkit and visual vocabulary for technological exploration – through light-painting and long-exposure photography – to capture, visualize and communicate the invisible electromagnetic fields that surround everyday objects. The project acts as a case study to answer a need for work that combines scientific and artistic practices, and to create an open dialogue between the public, scientists, designers, and engineers in a way that provides a visual language for understanding and critical discussion.
1 Introduction
In 2013, when the project was first created and published, there were (and still are) many different tools, techniques, platforms, sensors and processes for the capture, recording, visualisation and quantification of data – especially 'invisible' data, which is defined in this paper as statistical information that can be measured scientifically, but where the data describes an invisible phenomenon that cannot be seen by the human eye.
Long before interaction design existed as a discipline, Manzini
argued that materials, including digital and interface technologies,
are under such rapid change that there is a widening gap between
them and their cultural understandings. (Manzini & Cau 1989)
The project developed from a need to address a growing concern.
As designers, scientists and engineers we often use and talk about
invisible technologies, but how can we be sure our own mental model of these technologies is 1) accurate and 2) shared by others, primarily our target audience? The project was presented as an example of discursive design whereby the aim was to provide a visual vocabulary that could allow for dialogue (Bohm 1990) around an area of technology with limited public understanding but wide public usage – electromagnetic fields.
2 Invisible phenomena
Electromagnetic fields (EMF) are an ideal example of a phenomenon that is used and produced by most everyday technologies, but most documentation and information remains in the scientific and engineering domain. Information and explanation is not usually designed for use by the general public. In recent years the design industry – which works primarily in the consumer market, producing products, services and systems for the public – has actively promoted the use of seamlessness (Ratto 2007) in design proposals and creative solutions, deliberately 'making invisible' many of the technologies that make a system or product function in a desired way.
An electromagnetic field, as the name suggests, is made of two components, the electric field and the magnetic field. The magnetic field can be understood using Ampère's Law, an electromagnetism law that relates the magnetic field around a closed loop or surface to the electric current circulating through that same loop:
Fig. 1 Ampère’s Law (Duarte 2014)
The electric field can be understood using Gauss's Law, which describes the relation between the electric field flowing through a closed surface, the Gaussian surface, and the sum of the electric charges inside the volume limited by that same surface:
Fig. 2 Gauss’s Law (Duarte 2014)
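For reference, and as a hedged restatement rather than a reproduction of the expressions pictured in Figs. 1 and 2 (Duarte 2014), the two laws can be written in their standard integral forms:

```latex
% Ampère's law (magnetostatic form): the circulation of the magnetic field B
% around a closed loop equals mu_0 times the current enclosed by that loop.
\oint_{\partial S} \mathbf{B} \cdot d\boldsymbol{\ell} = \mu_0 \, I_{\mathrm{enc}}

% Gauss's law: the flux of the electric field E through a closed surface
% equals the enclosed charge divided by the permittivity of free space.
\oint_{\partial V} \mathbf{E} \cdot d\mathbf{A} = \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
```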
The project combined both, using the built-in magnetometer sensor inside the iPhone 4 and LG Nexus 4 smartphones. These sensors are capable of measuring both the magnetization of a magnetic material, like a ferromagnet, and the strength and direction of a magnetic field at a point in space.
2.1 Previous approaches to electromagnetic
visualisation
Electromagnetic fields have been the focus of scientific and artistic study and material exploration for several decades. From the 1820s, scientists such as André-Marie Ampère and Hans Christian Ørsted1 were demonstrating their scientific and mathematical discoveries through the design of scientific instruments that could visualise the unseen magnetic forces around objects with an electrical charge.
In recent years, within the field of Design, the fascination with invisible phenomena has developed beyond data visualisation, to allow provocation and a critique of previous modes of explanation and aesthetic methods of representation. In his book Hertzian Tales, Anthony Dunne dedicates a chapter to the “radio space”, although this is not an attempt to visualize the invisible, but to “explore new aesthetic possibilities for life in an electromagnetic environment”.
Whereas cyberspace is a metaphor that spatialises what happens in computers distributed around the world, radio space is actual and physical,
even though our senses detect only a tiny part of it. (Dunne 1999)
3 Photographic approach
1 Oersted’s law. April 21, 1820.
Accessed at (http://en.wikipedia.org/
wiki/Oersted%27s_law)
There is a history of using photographic techniques in the sciences and in Human-Computer Interaction (HCI) research to capture and understand complex or invisible interactions with the physical world – using photography as a visualisation tool to track and display information.
Light painting is a photographic technique that has a history of
being used to track motion and changes in space, over time. One
of the earliest examples of this photographic technique is Pathological walk from in front, made visible by incandescent bulbs fixed to the joints, by Étienne-Jules Marey and Georges Demeny (see Fig 3).
The motivation is often because of the power of the photograph to
engage with an audience without a requirement for technological
understanding, in contrast to a diagram or statistical information
that requires mental processing and ‘reading’ of information.
Marey wanted to give a visible expression to the continuity of movement
[…] and to do so within a single image. Only this, he believed would give
him quantifiable results – photographs from which measurements could
be taken. (Braun 1994)
In addition to the quantifiable information that could be stored within a single photograph using the light-painting technique, artistic expression and human emotion could be expressed and shared, the photograph acting as a discussion point between the author and the audience. Artists such as Marey & Demeny, Gilbreth, Man Ray and Pablo Picasso (in collaboration with Gjon Mili) used light-painting as a creative tool for expression and storytelling in their practice between 1930 and 1950 (see Fig 3).
Fig. 3 Light-painting works
(Marey & Demeny 1889),
Cyclegraph (Gilbreth 1914),
and “Picasso draws a Centaur”
(Picasso 1945)
3.1 Light-painting technique
The photographic technique used for the project is exactly the
same process as used by artists such as Picasso et al. to produce
their light-paintings of the 30s and 40s. A camera with a small
aperture size is placed on a tripod and the shutter is released for a
long period of time. Between 60 and 90 seconds were used for the
creation of all photographs in the project, but it is most likely that
Picasso and Mili would use a much longer time period, given the
capabilities of the camera equipment that was available to them
at that time. While the shutter is open, a strong light such as a candle, LED, or mobile phone screen can be moved in front of the lens, creating a trail of light that is captured by the camera's digital sensor (or photographic film).
When the shutter is finally closed, the remaining image will be the result of all the light that travelled through the lens during the entire exposure period – both the bright light used for light-painting and any ambient or stage lighting that was also present.
4 Everyday objects
A key motivation for the project was to capture and communicate invisible phenomena surrounding technology, to an audience
without any prior technical knowledge or experience. We chose
everyday devices and household items as the central objects for
investigation, so that the widest number of viewers could identify
and relate to the project. Everyday objects such as an Apple MacBook Pro, a radio alarm, an iPhone and a Google Nexus 4 were photographed. The purpose was to engage with a wider audience and provide a visual language and toolkit that can enable conversation and discussion across multiple disciplines, experiences and understandings.
It is only through a process of exploration and revelation that we are
able to develop our ‘object-world’ understandings as designers, in order
to assemble new perspectives on, and meanings around, emerging technology. (Arnall 2013)
Through experimentation we developed an understanding of both the photographic principles and limitations of light-painting, and the technical specifications of the magnetometer sensor inside an LG Nexus 4. Using the open-source programming language Processing2 and the open-source Ketai3 software library to access information from the phone's sensors, we created our own simplistic real-time data visualisation application that could run on any Android-enabled device.
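As an illustration only – the source of the original application is not reproduced in this paper, so everything beyond the Ketai sensor callback is an assumption – a minimal Processing sketch of this kind might look as follows:

```processing
// Minimal sketch, assuming Processing's Android mode with the Ketai library
// installed; KetaiSensor and onMagneticFieldEvent() belong to Ketai's public
// API, while the drawing code is an illustrative reconstruction.
import ketai.sensors.*;

KetaiSensor sensor;
float bx, by, bz;   // latest magnetometer reading, in microtesla

void setup() {
  fullScreen();
  sensor = new KetaiSensor(this);
  sensor.start();   // begin listening to the device sensors
}

// Ketai calls this whenever the magnetometer reports a new value
void onMagneticFieldEvent(float x, float y, float z) {
  bx = x;
  by = y;
  bz = z;
}

void draw() {
  background(0);
  // overall field strength at the phone's current position
  float magnitude = sqrt(bx * bx + by * by + bz * bz);
  // a simple "light brush": larger and brighter as the field grows
  float diameter = map(magnitude, 0, 200, 20, height);
  fill(255, map(magnitude, 0, 200, 60, 255));
  noStroke();
  ellipse(width / 2, height / 2, diameter, diameter);
}
```

Run full-screen on the phone in a darkened room, a sketch like this turns the handset itself into the light source whose size and brightness track the measured field, which is what the long-exposure camera then records.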
4.1 Data communication
2 Processing programming language.
Accessed at https://processing.org/
3 Ketai Library for Processing. Accessed at https://code.google.com/p/ketai/
After the creation of a very simple software tool, the next step was
to experiment with different visual languages that might help us
communicate and visualise the material qualities of the EMF that
can be detected around everyday objects.
Shape, size, colour, speed, depth, resolution and time were all
parameters that could be adjusted for each image. Through experimentation we arrived at a limited palette that could be successfully and repeatedly used to visualise and compare the EMF field
of any object.
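The exact parameter values of that palette are not published here, so the following mapping from field strength to colour and brush size is a purely illustrative sketch of how such a limited palette could be encoded:

```processing
// Illustrative only: thresholds, hue range and brush sizes are assumptions,
// not the values used for the published photographs.
color fieldColour(float magnitude) {
  colorMode(HSB, 360, 100, 100);
  // weak fields render blue, strong fields shift towards red
  float hue = map(constrain(magnitude, 0, 150), 0, 150, 240, 0);
  return color(hue, 80, 100);
}

float brushSize(float magnitude) {
  // brush diameter in pixels grows with the measured field strength
  return map(constrain(magnitude, 0, 150), 0, 150, 10, 400);
}
```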
4.2 Artistic expression
Understanding and clear communication was always a primary
consideration for each aesthetic decision when creating the
images. But the success of the project relied on how engaged the
public would be with the images and videos that were published.
Therefore a careful balance was made between data representation and the generation of an attractive visual language that
would intrigue viewers and provoke conversation and discussion.
This artistic expression is present within every single photograph, from the lighting and composition to the crop of the
product and the resolution of the light-painting inside the photograph. Most obvious is the affect that the movement of the hand
will have on the inal photographic image. To avoid the further
removal of information from the photographic image we constructed systems and methods of rigor, so that we could repeat
the same movements and gestures to generate similar subsequent
photographic images that could be used to contrast and compare
electromagnetic ields. Still photography was used to identify
areas of EMF that could be detected and visualised, then the gestures were repeated – sometimes more than two-hundred times
– to create a photographic sequence that could produce an understandable moving image.
Fig. 8 Final visual language through
photography
5 Conclusion
The project set out to identify and communicate an invisible aspect of our technological lives to a wider audience, and to engage with the public as well as the design, academic, scientific and engineering communities who might also find the research and approach valuable. Upon completing the project a two-minute
video was created and uploaded to Vimeo.com and the majority
of photographs (both successful and failed) were uploaded to a
dedicated album on Flickr.4
The toolkit that was created for the purpose of the project has been shared as open-source code,5 allowing anyone with programming experience to download, modify, improve, and ideally share their own learnings, to teach others and contribute to a wider public conversation. iPhone and Android versions of the mobile application are available to download for free from the respective app stores.
Fig. 9 Visualising electromagnetic
ields video (https://vimeo.
com/65321968)
5.1 Digital sharing platforms
As well as documentation, both these platforms allow image and
video content to be easily shared across the globe instantly. Using the internet, email, blogs and social media to share links and images that all link back to one another provides a content loop that connects and maintains a link between the produced content, the research, written text, articles and press releases and, perhaps most importantly, a digital record of the comments and discussion that took place online.
4 https://www.flickr.com/photos/luke_sturgeon/sets/72157633310156013/
5 https://github.com/lukesturgeon/iOS_EMF_Sensor
6 http://thecreatorsproject.vice.com/blog/light-painting-theelectromagnetic-field
5.2 Creating dialogue
After only two months of being published online, the project was
featured on the technology and lifestyle blog the Creators Project.6 The article was titled Light Painting The Electromagnetic Field and was written in response to the project, featuring the video, a selection of eight photographs and links to other 'related' projects and art, continuing the discussion of EMF and invisible technologies.
We’re surrounded by things we can’t see. In a recent project the pair
decided to make visible the electromagnetic field (EMF) that surrounds
many of the devices we use in our daily lives. To do this they used long
exposure photography and stop-frame animation to produce light paintings that show the EMFs that surrounds laptops and a old school tape
deck. (Holmes 2013)
The first article was discovered by several other technology7, lifestyle8, news9, fashion10, design11 and business websites12 and, since being uploaded to Vimeo in April 2013, the video has been played more than three hundred thousand times and shown at film festivals and design events around the globe.
The phone was used as a kind of light brush, which reacted to the changing strength of the EMF, and long exposures allowed them to capture the
whole field. Amusingly, the EMF from the laptop's hard drive was strong
enough to stall the phone’s magnetic sensor – so there’s still room for
improvement – but the result is pretty cool nonetheless. (Condliffe, 2013)
7 https://www.prote.in/en/feed/2013/07/visualisingelectromagnetic-fields
8 http://www.wired.com/2013/07/the-invisible-images-coming-fromour-favorite-devices/
9 http://www.huffingtonpost.co.uk/2013/07/03/laptop-invisibleforce-field_n_3541122.html
The conversation and discussion that was provoked by the decision to publish the work on digital sharing platforms was one of
the largest success points for the project. The aesthetic and technical decisions that led the the creation of images and stop-frame
animations provided visual content that could be shared easily,
and were used alongside provocative article titles such as “Your
MacBook Has a Force Field. This Is What It Looks Like”. This led
to a series of conversations about technologies and phenomena
that we can and cannot see, how we measure these things and
how we visualise and express them. The conversations were provoked by the original work.
You might view your laptop as a nice, neatly contained unit – but there’s
more bursting out of it than meets the eye. In fact, all of its electrical
components create complex magnetic and electric fields that spread far
and wide, and this video shows you their reach. (Condliffe 2013)
10 http://www.esquire.co.uk/gear/gadgets/4279/laptopelectromagnetic-forcefields/
Though the images are beautiful, the information we can glean from
them is still abstract. (Stinson 2013)
11 http://www.creativereview.co.uk/feed/july-2013/06/visualisingelectromagnetic-fields
The phone was used as a kind of light brush, which reacted to the changing strength of the EMF, and long exposures allowed them to capture
the whole field. […] there's still room for improvement – but the result is
12 http://www.businessinsider.com/designers-make-force-fields-from-laptop-and-iphones-visible-20137?IR=T
The images from the project focused on provocation and
engagement instead of the accurate reading of numeric and
statistical information. They exist as a way to excite and provoke
conversation.
5.3 Education
Fig. 10 Science Museum workshop
participants
In September 2014 the Science Museum13 in London requested a participatory workshop building on the original concept, in collaboration with the museum's own collection of everyday electronic objects spanning decades of human invention and scientific discovery. The one-day workshop provided hands-on experience for participants, who were introduced to the photographic techniques and given access to an improved version of the EMF application that was created for the original project.
Participants of the workshop had backgrounds that ranged from photography, design and art to software engineering, social sciences and business strategy. They were paired and each given a camera, tripod, and photography station. Through hands-on learning and discussion they were able to develop a thorough understanding of light painting and electromagnetic fields within a few hours, producing over 300 photographs that were presented by each pair at the end of the day.
Fig. 12 Workshop participant results
13 http://www.sciencemuseum.org.uk/visitmuseum/Plan_your_visit/events/media_space_events/field_life_of_electronic_objects.aspx
14 https://www.flickr.com/groups/secretlifeofeverydayobjects/pool
In comparison to the original images, the photographic results from the workshop demonstrate a preference for playful expression rather than the comparison and communication of invisible electromagnetic fields. However, the participants were able to understand and then explore the material qualities of EMF in order to achieve the expressive images they created.
The workshop concluded with a late-night public event held at
the Science Museum. The results from the workshop were displayed alongside the electronic objects from the workshop and
a working camera setup, so that visitors could learn through
hands-on demonstrations.
For us, the queue of visitors during the late-night event was the measure of success for the overall project. Through composed photography and careful consideration of the visual language and presentation of scientific phenomena we created intrigue, then understanding and dialogue, as well as facilitating discussion and conversation between visitors through the presentation and explanation of the original concept, image-making process and motivations for the project.
The final work has been collected in a public Flickr group14 titled "The Secret Life of Everyday Objects". This approach allows anyone around the globe to contribute their own work and participate in a discussion around invisible phenomena and technology, using existing photo-sharing platforms and social media to engage with a curious audience, regardless of expertise or available tools.
5.4 A new approach to visual data representation
The project has resulted in a better understanding of, and a case study for, the engagement of a wider audience in the conversations around technology, design and science. Through the careful representation of information in an accessible and comprehensible visual vocabulary, open discussions can be achieved across disciplines and regardless of technical experience, provoking conversation and new work through the collaboration of different disciplines.
Acknowledgements.
The original project was created at the Copenhagen Institute of
Interaction Design (CIID) in response to a design brief set by visiting faculty Matt Cottam and Timo Arnall.
References
Arnall, Timo. Making Visible: Mediating the material of emerging
technology. 2014
Bohm, David. On Dialogue. Routledge, 1990
Braun, Marta. The work of Etienne-Jules Marey (1830-1904). University of Chicago Press, new edition. 1994
Condliffe, Jamie. The Invisible Electronic Fields That Surround Your Macbook. Gizmodo. Accessed at: http://gizmodo.com/the-invisible-electronic-fields-that-surround-your-macb-656164816
Duarte, João. Electromagnetic Fields (EMF) in High Voltage Power Lines. Accessed at http://thefragmentationparadox.blogspot.co.uk/2014/03/electromagnetic-fields-emf-in-high_16.html. 2014.
Dunne, Anthony. Hertzian Tales. RCA CRD Research Publications, 1999
Gilbreth, Frank. Light Painting Photography, 1914. Accessed at http://
lightpaintingphotography.com/light-painting-history/. 2014
Holmes, Kevin. Light Painting The Electromagnetic Field. The Creators Project. Accessed at: http://thecreatorsproject.vice.com/blog/light-painting-the-electromagnetic-field
Manzini, Ezio & Cau, Pasquale. The Material of Invention. Cambridge,
MA: The MIT Press, 1989
Marey, Étienne-Jules & Demeny, Georges. Light Painting 1889. Accessed
at http://lightpaintingphotography.com/light-painting-history/. 2014
Ratto, Matt. Ethics of Seamless infrastructures: Resources and Future
Directions. International Review of Information Ethics. 2007
Stinson, Liz. Your MacBook Has a Force Field. This Is What It Looks
Like. Wired.com. Accessed at: http://www.wired.com/2013/07/
the-invisible-images-coming-from-our-favorite-devices/
Data Exploration
on Elastic Displays
using Physical Metaphors
Mathias Müller
Chair of Media Design, Technische Universität Dresden, Dresden,
Germany
[email protected]
Thomas Gründer
Chair of Media Design, Technische Universität Dresden, Dresden,
Germany
[email protected]
Rainer Groh
Chair of Media Design, Technische Universität Dresden, Dresden,
Germany
[email protected]
Keywords: elastic displays, information visualization, haptic
interaction
Elastic displays empower users to interact naturally through pushing and pulling, folding and twisting. While this kind of interaction is not as precise as on other devices, it utilizes interaction metaphors which are easy to learn and understand. We present a system that uses physically based interaction and visualization metaphors to gain a deeper comprehension of the underlying data and its structure. By applying pressure on specific interface elements, associated items are attracted or repelled; the force exerted on the items themselves translates into a semantic zoom behavior to display more in-depth information about the specific entity. We present the core concepts of the system, explain the decisions made during the design process and discuss the advantages and disadvantages of the proposed system, as well as giving a short view on further improvements and open research questions.
1 Introduction
Interacting with complex visualizations is a common challenge. Faceting data, selection strategies and the visualization of correlations between data points result in complex user interfaces which lack intuitiveness and the option to explore freely without in-depth knowledge about the underlying concepts and data structures. A possible solution to these issues could be elastic displays, which offer an additional dimension of interaction. Deformable elastic displays are a new field in Human-Computer Interaction. Due to their elasticity, these displays allow users to change the surface by pulling, pushing or twisting. Furthermore, elastic displays offer a unique interaction experience through haptic feedback. The elastic membrane may be imprecise when compared to a mouse, but offers rich multi-modal feedback which facilitates interaction.
Fig. 1 Exploring different types of visualizations by pushing and pulling into the flexible surface
By extending the direct manipulation paradigm of touch interaction with a large range of different interaction states coupled to the pressure applied on the surface, elastic displays offer a rich and versatile interaction space. The deformation of the surface addresses one of the core problems with current touch devices – they offer basically the two states "on" and "off" for touch recognition. More fine-grained touch interaction can only be achieved with additional devices like pressure-sensitive digitizers or by utilizing the duration of the touch for emulating pressure. The first option is useful in many scenarios, but lacks the versatility of the human hand to form and execute different gestures and again puts a tool between the finger and the interaction surface. The second solution represents a rather weak, indirect substitute for real force-sensitive surfaces.
With metaphors based on physical interaction, the interface can generate additional cues for understanding the data and the connections between elements of the visualization. We argue for using elastic displays to explore complex data sets. We present an approach applied to data from a database of visualizations, using an elastic display called FlexiWall. The data is taken from the DelViz database, which consists of more than 700 different visualizations classified in hierarchically organized categories of keywords describing the content of the different visualizations.
The goal of this paper is to present a novel approach for the playful and intuitive exploration of data sets using the advantages of elastic displays.
2 Related Work
In recent years there has been a lot of research focused on elastic displays. Cassinelli and Ishikawa first published on an elastic display they called the Khronos projector (Cassinelli and Ishikawa 2005). Peschke et al. describe an elastic display used as a tabletop system (Peschke et al. 2012). In former publications, we classified suitable data types and interaction techniques based on the experience with both DepthTouch (Fig. 2, left) and FlexiWall (Fig. 2, right; Franke et al. 2014). The transfer of multi-touch paradigms like gestures and tangible objects, and their applicability in the context of deformable surfaces, led to the definition of a design space for elastic displays, in which gestures and other interaction techniques like gravibles or geometric shapes are introduced (Gründer et al. 2013).
Fig. 2 The DepthTouch prototype
in action (left). Core components
and system setup of the FlexiWall
prototype (right).
Troiano et al. identify gestures used on elastic displays by utilizing the guessability studies method. They find that grab and pull, push with flat hand, grab and twist, pinch and drag, and push with index finger are the gestures used most often for the interaction in depth (Troiano et al. 2014). Regarding solutions for common technical issues with elastic displays, Watanabe et al. describe solutions to the projection and warping problems occurring while pushing or pulling the membrane (Watanabe et al. 2008).
Due to their prototypical character and easy setup, elastic displays have been used in the context of artistic installations and demonstrators, which facilitate playful exploration. Examples of artistic installations are Cloud Pink and Soak, Dye in light from everyware (everyWare01, everyWare02) and Firewall from Sherwood (Sherwood 2012). Use cases beyond these scenarios are described in (Gründer et al. 2013), (Sterling 2012) and (Cassinelli and Ishikawa 2005). One of the rare works describing data visualization on elastic displays is the ElaScreen, which utilizes an elastic display system for a graph navigation scheme (Yun et al. 2013).
The presented work is based on the DelViz system (Keck et al. 2011). They classified data visualizations from the Visual Complexity collection (Visual Complexity) with a faceted approach and then describe multi-touch exploration software focusing on the connection of facets and visualizations. As one of the core advantages of elastic displays is their haptic nature, we decided to follow the concept of physically-based interaction by Jacob (Jacob et al. 2008) to mediate correlations of objects and the interaction with the visualization. An example of concepts from physics applied to interaction can be found in (Agarawala and Balakrishnan 2006). They describe a system based on physical metaphors, presenting a 3D visualization of desktop icons which are influenced by physical interaction with a pointer that can grab and throw the icons around.
3 Interaction Concept
We chose to use the DelViz classification of visualizations. Every visualization type is described by a set of metadata such as a short description, title, web link and the date it was added. The visualizations are associated with a number of tags representing their most important properties. These keywords are based on three main categories: Data, Visualization and Interaction, and their associated dimensions (Fig. 3). The tags in the dimensions are competing terms to which the items are matched. However, they are not mutually exclusive, e.g. visualizations can combine text and images, address both the science and economy domains, or employ scrolling as well as Overview/Detail functionality. There are complex relations between the items based on their associated tags. The relations between tags are formed by the items they are assigned to: if items are tagged as both 2D and static, those two tags have a connection. The user is able to filter by several tags or deselect them in order
Fig. 3 The DelViz classification schema and associated colors used in the prototype with an example visualization, associated tags and description (bottom right).
to explore the dataset. According to the selection, items are highlighted or diminished. The dataset contains a few major tags, like 2D or Network, which are associated with about two thirds of the items. Other keywords are assigned to only a handful of items, which implies a rather unbalanced tag distribution. The goal is to search for and identify visualizations matching given properties, represented by their associated tags. The core concept is to explore items based on weighting several tags. The concept used for the prototype described in this paper originated in the work on the DepthTouch (Peschke et al. 2012).
One of the applications of the DepthTouch was a simple physical simulation: spheres projected on the surface reacted to the deformation by moving according to the resulting gravitational forces of the deformed surface (cf. Fig. 2). Observations of users show that interfaces based on simple physical concepts like gravitation, mass, spring forces or force fields are playful and easy to understand and learn. Once people push the surface, the immediate haptic and visual feedback helps them quickly form a mental model of how the interaction works. In contrast to, for example, stacked images which are selectively blended according to the deformation – which requires the user to associate the deformed surface with abstract data or image layers – this "natural" reaction to the actions of the user is immediately recognized and interpreted correctly. Users immediately know which actions they have to undertake to achieve a specific goal (e.g. to split a group of spheres by creating "holes" on two opposite sides of the group) because they are used to these simple physical principles from daily life. Therefore our implementation is based on the concept of simulating physical forces to interact with a set of items. The core idea is that, for the exploration of large data sets, filtering and grouping of objects represent basic tasks that can be translated into a simple physical simulation, which allows users to collect objects by pushing into the depth and to separate items by creating peaks in the elastic display.
Fig. 4 Tags (blue) and inactive items (grey) are floating on the surface (top left). When pushing a tag into the surface, it attracts associated items, which change their shape to show a thumbnail of the associated visualization (top middle). When two tags are activated by pushing into the surface, associated items move toward the gravitational center (top right). Pulling the surface towards the user pushes away and deselects associated items (bottom right). Details about the visualization are displayed using a semantic zoom: the more pressure is applied, the more information is shown (bottom middle and left).
Transferred to the DelViz scenario, the tags of the dataset represent gravitational centers. Items have a "natural" repulsion which prevents them from being influenced by the tags in their normal state. However, when pushing into the surface at the position of a tag, its gravitational force is increased according to the applied pressure and all associated items are attracted by it. They do not just appear next to the tag but make their way to it. The movement not only indicates the association but also its strength, represented by the movement speed of an item. This way the user can tap tags and observe which items are connected. In addition to the changing movement, the items' representation contains a thumbnail of the associated visualization and additional information about associated categories. Thin lines represent connections to other tags. Pulling the surface towards the user reverses this force, so that items are pushed away (Fig. 4, second image) and fade out. By applying different pressures to several tags, items are filtered and concentrate around the area next to the gravitational center of all manipulated tags. Items belonging to only one tag will move towards it. Items attracted to more than one gather in their center (Fig. 4, third image). The interaction is based on simple push and pull. Filtering is achieved by applying different gravitational forces to the tags, while the visualization of detailed information and connections between tags and content is retrieved by activating an item. An item, in turn, is activated by pushing into it. Items attracted to tags get an image depicting the visualization they stand for, so the user knows that these items can be selected. Fig. 5 depicts the combination of possible states for items.
The presentation of information for each item follows the principle of a semantic zoom. Depending on the amount of pressure applied, more or less information is displayed, starting with the title of the visualization and its connections to other tags. Applying more pressure reveals a larger image of the visualization and finally additional context information, such as the description or the associated web address (Fig. 4, last two images). The same applies to tags: the more pressure is applied, the faster associated items accelerate towards the tag. If a pull affects a tag, associated items are repelled from it.
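A minimal sketch of how such a pressure-driven semantic zoom could be implemented is given below. It is an illustration only: the depth thresholds, level names and the Item structure are assumptions, not values or types taken from the FlexiWall prototype.

```python
# Hypothetical sketch: mapping surface deformation depth to semantic zoom levels.
# Thresholds, level granularity and the Item fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    thumbnail: str      # path to a small preview image
    description: str
    url: str

def detail_level(depth_mm: float) -> int:
    """Translate push depth (in millimetres) into a discrete zoom level."""
    if depth_mm < 10:      # light touch: only the title and tag connections
        return 0
    elif depth_mm < 40:    # moderate pressure: add a larger image
        return 1
    else:                  # strong pressure: reveal full context information
        return 2

def render(item: Item, depth_mm: float) -> dict:
    """Return the pieces of information to display for the current pressure."""
    level = detail_level(depth_mm)
    view = {"title": item.title}
    if level >= 1:
        view["image"] = item.thumbnail
    if level >= 2:
        view["description"] = item.description
        view["url"] = item.url
    return view

# Example: pushing harder on the same item reveals progressively more detail.
item = Item("Treemap", "treemap.png", "Hierarchical space-filling layout", "http://example.org")
print(render(item, 5))    # {'title': 'Treemap'}
print(render(item, 25))   # adds 'image'
print(render(item, 60))   # adds 'description' and 'url'
```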
Fig. 5 Different representations of items according to the strength of the force applied to the associated tag.
4 Design Process
One issue with elastic displays is the question of how to motivate the user to touch the screen and push, pull or somehow deform it. Based on observations with similar systems, this represents a critical point. Once users have interacted with the system or observed how other people used the display, the core concepts of the system should be quite easy to understand and become accessible by playing with it. However, offering affordances for touching and deforming a screen is quite a difficult task, due to users' contrasting experiences with current systems. We decided to offer subtle signs of interactivity: the items are constantly moving, and from time to time specific items start glowing, revealing parts of the connections to surrounding tags. Although this behavior only partially solves the problem of users staying away from the surface, it provides clues on how to interact and should arouse curiosity about the system.
Another challenge was to create the physics system for the simulation of item and tag movement. We wanted to create a rather simple system which feels authentic to the user when interacting with it. However, there is a quite large number of constraints, resulting in a number of system parameters which had to be balanced out to guarantee a certain stability and the self-recoverability of the initial state after interaction took place. The system basically computes two types of forces between objects, based on their semantics:
(1) Forces between tags: tags sharing a large number of items attract each other. Additionally, tags belonging to different dimensions are pushed away from each other.
(2) Forces between items: As mentioned above, items are pushed towards active associated tags or their common gravitational center, or pushed away if the tag is pulled out of the surface (a minimal sketch of both force types is given below).
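The two force types can be sketched roughly as follows. This is a simplified illustration under assumed force constants and data structures, not the actual FlexiWall implementation.

```python
# Illustrative sketch of the two semantic force types.
# Constants (k_shared, k_dim, k_pull) and the dict layout are assumptions.
import math

def attract(pos_a, pos_b, strength):
    """Return a force vector pulling a towards b, scaled by `strength`."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy) or 1e-6
    return (strength * dx / dist, strength * dy / dist)

def tag_tag_force(tag_a, tag_b, k_shared=0.05, k_dim=0.03):
    """(1) Tags sharing many items attract; tags of different dimensions repel."""
    shared = len(tag_a["items"] & tag_b["items"])
    force = attract(tag_a["pos"], tag_b["pos"], k_shared * shared)
    if tag_a["dimension"] != tag_b["dimension"]:
        repel = attract(tag_a["pos"], tag_b["pos"], -k_dim)
        force = (force[0] + repel[0], force[1] + repel[1])
    return force

def tag_item_force(item, tag, pressure, k_pull=0.2):
    """(2) Items move towards a pushed tag (positive pressure) or away from a
    pulled tag (negative pressure); pressure comes from the surface deformation."""
    if item["id"] not in tag["items"]:
        return (0.0, 0.0)
    return attract(item["pos"], tag["pos"], k_pull * pressure)

# Example usage with two tags and one item.
tag_2d = {"pos": (0, 0), "dimension": "Visualization", "items": {1, 2}}
tag_net = {"pos": (4, 3), "dimension": "Data", "items": {1}}
item = {"id": 1, "pos": (2, 2)}
print(tag_tag_force(tag_2d, tag_net))              # attraction via one shared item, plus dimension repulsion
print(tag_item_force(item, tag_2d, pressure=0.8))  # item pulled towards the pushed tag
```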
To prevent the system from reaching a stable state where tags and items do not move anymore, we added small centripetal forces of random speed to each item. Additionally, the direction is modified randomly to achieve a steady, slightly chaotic flow of the visualization. Collision is based on forces degrading over the distance between objects. Similar collision forces prevent items from leaving the screen and push them constantly towards the center. As we wanted to create a flexible system which acts independently of the visual representation and can also be configured for different associated data sets, these forces need to scale with, or adapt to, the number of items and tags, the size of their graphical representation and the screen size. It is easy to change parameters at the start of the simulation, like object size and the intensity of applied forces. Some parameters can also be changed dynamically during the simulation, which enables a wide range of possible effects and visualizations for different aspects of the system.
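A rough sketch of the perturbation and containment forces described above might look as follows; the decay function, constants and screen size are assumptions for illustration only.

```python
# Illustrative sketch: random jitter keeps the layout moving, and a
# distance-decaying containment force keeps items on screen.
# All constants are assumptions, not values from the prototype.
import random, math

def jitter(max_speed=0.05):
    """Small force of random magnitude and direction (prevents a frozen layout)."""
    angle = random.uniform(0, 2 * math.pi)
    speed = random.uniform(0, max_speed)
    return (speed * math.cos(angle), speed * math.sin(angle))

def containment(pos, screen=(1920, 1080), k=50.0):
    """Force pushing an item away from the screen border, decaying with the
    squared distance to that border."""
    fx = k / (pos[0] + 1.0) ** 2 - k / (screen[0] - pos[0] + 1.0) ** 2
    fy = k / (pos[1] + 1.0) ** 2 - k / (screen[1] - pos[1] + 1.0) ** 2
    return (fx, fy)

# Per frame, each item would accumulate jitter and containment on top of the
# semantic forces sketched earlier; in practice the constants would have to
# scale with the number of items, their size and the screen size.
pos = (100.0, 540.0)          # item close to the left border
print(jitter())               # small random drift
print(containment(pos))       # net push back towards the screen centre
```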
Fig. 6 Design iterations of tags (left) and visualization items (right). The rightmost items represent the final versions.
The design of tags and visualization items followed the idea of gravitational forces between objects. As forces act equally in every direction, the decision to use circles or spheres to represent objects was obvious. However, the question was which information should be displayed on the tags or the items. Tags are associated with a color representing the associated data dimension (cf. Fig. 3). We decided to select three categories for each dimension. As a result, we get nine possible categories a tag can belong to. In the final design, a tag is represented by a circle consisting of nine segments representing the categories. Categories of the associated dimension are drawn in their respective color; other categories are greyed out. The associated category is drawn with a thicker line, its name written outside the circle. The tag name, as the most important information, is written in the center of the circle (Fig. 6).
The representation of items follows the same pattern: the visualization is depicted by a circular thumbnail, surrounded by circle segments representing the tag categories this item is associated with. If an item is associated with two or more tags of a category, this segment is drawn in a solid color; in the case of one tag it is semi-transparent; otherwise the segment is not drawn at all. The idea behind this visualization is that the user can identify similar visualizations by their characteristic layout of surrounding circle segments (Fig. 5, Fig. 6).
Fig. 7 Display of full details for a
selected visualization, including
a larger image, metadata and
connections to other tags.
The final design also incorporates connection lines drawn from active items to their associated tags (Fig. 7). The idea is that the user gets a fast impression of which tags are relevant for further filtering of items: if a tag is not connected (or only connected by a few lines) to currently active items, pushing this tag will not further diversify the selected set of items. Lines are stronger if an item is connected to multiple selected tags (Fig. 1, right image).
5 Framework
The technical setup of the prototype consists of a standard Windows PC running the application, a large elastic fabric used for back projection, a projector and a Microsoft Kinect as a depth sensor. The Kinect is positioned next to the projector and tracks the surface. Each point in the depth image delivered by the Kinect is mapped to the associated point on the fabric. The interaction with the elastic surface depends completely on the tracking information delivered by the Kinect, as no other sensing technology is involved in the system (Fig. 2, right image).
We extended our existing FlexiWall framework (Müller et al. 2014) to achieve a precise tracking of surface deformations. The former implementation of the depth interaction followed a simple principle: data was organized into several layers, and the depth image delivered by the Microsoft Kinect was transformed into a greyscale image, where every tone represented a specific depth value. Based on this texture, a pixel shader blended the different data layers into each other. As this happened frame by frame, the visualized image responded seamlessly to deformations. As the texture blending is done on the graphics card, this approach is fast and accurate. The problem was that the interaction heavily depends on image content. Only the depth direction was interactive, so it was not possible to drag items over the surface or rearrange things.
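The layer-blending principle of that former implementation can be sketched as follows. This is a CPU-side NumPy illustration of what the pixel shader did on the GPU; the array shapes, depth range and normalisation are assumptions.

```python
# Illustrative CPU sketch (NumPy) of the former GPU pixel-shader principle:
# the Kinect depth image, normalised to [0, 1], selects how two stacked data
# layers are blended per pixel. Shapes and the depth range are assumptions.
import numpy as np

def blend_layers(depth_mm: np.ndarray, layer_top: np.ndarray,
                 layer_bottom: np.ndarray, d_min=800.0, d_max=1200.0) -> np.ndarray:
    """Blend two RGB layers per pixel according to the surface deformation.

    depth_mm      -- (H, W) raw depth values from the sensor
    layer_top     -- (H, W, 3) image shown for an undeformed surface
    layer_bottom  -- (H, W, 3) image revealed when pushing into the surface
    """
    # Normalise depth to a greyscale blend weight in [0, 1]
    w = np.clip((depth_mm - d_min) / (d_max - d_min), 0.0, 1.0)[..., None]
    return (1.0 - w) * layer_top + w * layer_bottom

# Example with dummy data: a push in the image centre reveals the bottom layer.
depth = np.full((480, 640), 800.0)
depth[200:280, 280:360] = 1200.0          # simulated deformation
top = np.zeros((480, 640, 3)); bottom = np.ones((480, 640, 3))
out = blend_layers(depth, top, bottom)
print(out[240, 320], out[0, 0])           # [1. 1. 1.] at the push, [0. 0. 0.] elsewhere
```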
The current implementation includes basic finger tracking based on the deformation of the surface. The system does not support detection of touch; all computations of physical forces rely upon accurate information about the local minima and maxima formed by the current shape of the surface. These are reconstructed by computing the partial derivatives of the depth values in the horizontal and vertical direction. For performance optimization purposes, a down-scaled version of the depth image is used for the derivatives.
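A minimal sketch of this reconstruction step is given below, using NumPy gradients on a down-scaled depth image. The down-scaling factor, the flatness threshold and the synthetic test data are assumptions; whether a detected extremum corresponds to a push or a pull depends on the sensor orientation in the rear-projection setup.

```python
# Illustrative sketch: locate local extrema of the deformed surface from a
# down-scaled depth image via partial derivatives. Scale and threshold are
# assumptions for illustration.
import numpy as np

def find_extrema(depth: np.ndarray, scale: int = 4, threshold: float = 5.0):
    """Return (minima, maxima) pixel coordinates in the down-scaled image."""
    small = depth[::scale, ::scale]                  # cheap down-scaling
    dy, dx = np.gradient(small.astype(float))        # partial derivatives
    flat = (np.abs(dx) < threshold) & (np.abs(dy) < threshold)

    # Second derivatives distinguish valleys from peaks of the depth values;
    # which of the two is a "push" depends on the sensor's viewing direction.
    dyy, _ = np.gradient(dy)
    _, dxx = np.gradient(dx)
    minima = np.argwhere(flat & (dxx > 0) & (dyy > 0))
    maxima = np.argwhere(flat & (dxx < 0) & (dyy < 0))
    return minima, maxima

# Example with a synthetic deformation (a smooth bump added to a flat surface).
yy, xx = np.mgrid[0:480, 0:640]
depth = 1000.0 + 80.0 * np.exp(-((xx - 320) ** 2 + (yy - 240) ** 2) / 2000.0)
minima, maxima = find_extrema(depth)
print(maxima.mean(axis=0))   # detected coordinates cluster around (60, 80), the bump centre
```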
6 Discussion
Fig. 8 Semantic zoom for displaying visualization details: when pushing slightly, the name is shown. Applying more pressure, the associated image pops up and the description is revealed.
The presented system represents a new way of exploring large and extensive data sets by applying a basic physical model as a manipulation technique. Facilitated by the possibility to use the deformation of the fabric to sculpt the interaction space, exploring the facets and their content generates a playful experience. On the technical side, the system itself is sometimes lacking in terms of responsiveness and suffers from the low resolution of the Kinect sensor and from artifacts resulting from the down-scaling of the depth image, which results in a loss of precision for the deformation reconstruction. Smoothing the minima and maxima in both the positional and temporal domain does help to increase the accuracy, but at the same time reduces the responsiveness of the system. This does not severely affect the interaction when selecting tags and filtering entities. However, pushing items to reveal detail information and stepping through the different semantic zoom states (cf. Fig. 8) can be inconvenient, due to inaccurate position detection and tracking lag.
However, most problems are compensated for by the additional interaction dimension, which makes it extremely easy to adjust results even if a former state has to be recovered after a tracking error. The concept focuses on playful exploration and basic selection tasks, which suits the rather imprecise but intuitive interaction style. Users easily learn the concepts of the system by playing with it, due to its reactiveness and limited feature set. As the current implementation recognizes a fair number of local extrema, collaborative use is another feature (or even requirement, depending on the complexity of the data set) of the elastic surface. As selecting or deselecting three or four tags at the same time is difficult for one user alone, and due to the dynamics of the system, the elastic surface necessitates collaboration for complex selection or filtering operations.
7 Lessons Learned
As the technical basics of the FlexiWall and its predecessor, the DepthTouch, are nearly identical, the application can be deployed on both systems. However, the orientation of the interactive surface plays an important role. As the DepthTouch is a tabletop with an elastic surface, one problem of the current implementation is the orientation of the title and the objects' descriptions. People interacting with a tabletop usually stand around the table, so displaying text is problematic when the position of the user is unknown. The FlexiWall, as a vertical screen, benefits from its inherent bottom-up orientation; text orientation does not represent an issue here. On the other hand, the concept of gravity may be easier to understand on a tabletop, as the pushing and pulling direction coincides with its direction. Therefore, the abstraction of forces between objects can be more easily deduced from the natural direction of gravity.
Material stiffness and the size of the elastic surface are further points of interest. Interacting with the large fabric on the FlexiWall deforms the whole surface; fine adjustments, or pushing and pulling nearby objects, are difficult due to the size of the projection area. A stiffer material helps to increase positional accuracy and reduces the influence on other points. This has an impact on collaborative use, as low interference between different locations of deformation allows more users to interact with the surface simultaneously and therefore more complex filters to be created.
Observations of test users show that one frequently demanded feature is the opportunity to preserve distinct states, e.g. to save the current deformation in order to select additional items, or to retrieve detail information for all currently selected items without losing the current configuration of tags or items. The core idea is quite obvious, but the consequences are extensive: as the state of the physical surface cannot be preserved (or restored later), saving the virtual state breaks the strong connection between the physical display surface and the forces based on its deformation. The question arises whether such a system is still easy to understand, and how large the differences between the physical and the virtual state can become before the user can no longer link the visual representation to the physical/haptic experience.
In combination with demands to save internal states, users also often mention dedicated gestures to trigger complex actions, issue system commands or execute special operations on the data set. The diversity of possible gestures on and with the surface (twisting, bending, flipping, bi-manual gestures, speed and size of gesture) offers many options. Possible (simple) gestures include wipe gestures to put items to the side or pinch-like gestures to zoom into the visualization. However, these gestures again represent another level of abstraction and have to be learned before being usable. Although gestures add expressiveness to the system, the increased complexity limits its intuitive, playful use.
As mentioned in section 4, the problem of the "first encounter" remains unsolved. Providing affordances for touching and pushing the surface may require physical extensions of the screen. One idea could be magnetic handles (e.g. made of semi-transparent plastic) which are attached to the elastic surface.
A more technical issue is the correction of the image distortion resulting from deforming the screen. While this distortion does not really annoy users interacting with the system, the discrepancy between visual representation and tracking position poses a severe problem, especially when interacting with items located near the border of the screen.
A final observation relates to the response time of the physical simulation. We decided to break the physical rules at certain points to ensure a fluent interaction. Once selected, tags remain in their position until the user releases them. The same applies to the selection of content items: if one of these is selected, the whole simulation is stopped. These two adjustments are needed to introduce a time delay when the system recognizes the deselection of an object. In order to simplify recovery from tracking errors, forces on objects are reduced for a short amount of time, so that the user can easily reselect an object if the user or the system loses track of it.
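One possible way to express these two rule-breaking adjustments is sketched below; the grace period and damping factor are assumed values, chosen only to illustrate the idea.

```python
# Illustrative sketch: selected objects are frozen, and after a (suspected)
# deselection forces are damped for a short grace period so the user can
# recover from tracking errors. Timing and damping values are assumptions.
import time

GRACE_SECONDS = 0.5
DAMPING = 0.2

class TrackedObject:
    def __init__(self):
        self.selected = False
        self.deselected_at = None   # wall-clock time of the last deselection

    def on_selection_lost(self):
        self.selected = False
        self.deselected_at = time.monotonic()

    def force_scale(self) -> float:
        """Fraction of the simulated force that should currently be applied."""
        if self.selected:
            return 0.0              # frozen while selected
        if self.deselected_at is not None:
            if time.monotonic() - self.deselected_at < GRACE_SECONDS:
                return DAMPING      # reduced forces: easy to reselect
        return 1.0                  # normal simulation
```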
8 Conclusion
In this paper we presented a system to explore faceted data like
the DelViz data set. As an interaction device we use the elastic displays DepthTouch and FlexiWall. The advantages of these
elastic displays are the haptic feedback and intuitive interaction
techniques. Typical gestures like pushing and pulling the fabric are
used to select tags and data items, which react corresponding to the
underlying physical simulation. Items are attracted or repelled and
thus allow a fast understanding of the data and its structure by recognition of movement patterns. The amount of force used on the
elastic membrane directly translates to force in the simulation. The
more pressure is applied; the stronger tags and items react to each
other. Additionally the items present more information as force is
used to zoom semantically into items. Further on we discuss the
technical properties and problems of the system. It allows fast
interaction and comprehension, but lacks the precise detection of
movement and discrimination of proximal touches.
We will try to keep the interaction as simple as possible and will mainly work on detection and aesthetic problems in the near future. We want to incorporate technical improvements for more precision and try advanced algorithms for better tracking. Afterwards, we would like to conduct user studies to validate the exploration concept and especially the advantages and disadvantages of the system.
Acknowledgements.
On behalf of Mathias Müller this work has been funded by the
German Federal Ministry of Education and Research (project no.
16SV7123).
References
Agarawala, Anand, and Balakrishnan, Ravin. Keepin' it real: pushing the desktop metaphor with physics, piles and the pen. In Proc. CHI '06. ACM, New York, 2006: 1283-1292.
Bang, Hyunwoo, and Heo, Yunsil. Cloud Pink. http://everyware.kr/home/cloud-pink/. 2011. Last accessed: 31-01-2015.
Bang, Hyunwoo, and Heo, Yunsil. Soak, Dye in light. http://everyware.kr/home/soak/. 2011. Last accessed: 31-01-2015.
Cassinelli, Alvaro, and Ishikawa, Masatoshi. Khronos projector. In SIGGRAPH 2005 Emerging Technologies. ACM, New York, 2005.
Franke, Ingmar S., and Müller, Mathias, and Gründer, Thomas, and
Groh, Rainer. FlexiWall: Interaction in-between 2D and 3D Interfaces. In
Proc. HCII 2014, Springer, Berlin 2014: 415-420.
Gründer, Thomas, and Kammer, Dietrich, and Brade, Marius, and Groh, Rainer. Towards a Design Space for Elastic Displays. In CHI 2013 Workshop: Displays Take New Shape, ACM, New York, 2013.
Jacob, Robert J.K., and Girouard, Audrey, and Hirshfield, Leanne M., and Horn, Michael S., and Shaer, Orit, and Solovey, Erin T., and Zigelbaum, Jamie. Reality-based interaction: a framework for post-WIMP interfaces. In Proc. CHI '08. ACM, New York, USA, 2008: 201-210.
Keck, Mandy, and Kammer, Dietrich, and Iwan, René, and Taranko,
Severin and Groh, Rainer. DelViz: Exploration of Tagged Information
Visualizations. Berlin, 2011: Informatik 2011 - Interaktion und
Visualisierung im Daten-Web.
Lima, Manuel. Visual Complexity. http://www.visualcomplexity.com/vc/.
Last accessed: 31-01-2015.
Peschke, Joshua, and Göbel, Fabian, and Gründer, Thomas, and Keck,
Mandy, and Kammer, Dietrich, and Groh, Rainer. DepthTouch: An
Elastic Surface for Tangible Computing. In Proc. AVI 2012, ACM, New York,
2012: 770–771.
Sherwood, Aaron, and Allison, Mike. Firewall. http://aaron-sherwood.com/works/firewall/. 2012. Last accessed: 2014-12-24.
Sterling, Bruce. Augmented Reality: Kreek Prototype 2.0, Kinect-controlled interface. By Stephanie Paeper, Daniel Dormann, Lukas Höh and Mathias Demmer (klangfiguren.com). Wired Beyond the Beyond, 2012: http://www.wired.com/2012/05/augmented-reality-kreek-prototype-20-kinect-controlled-interface/. Last accessed: 2015-01-31.
Troiano, Giovanni M., and Pedersen, Esben W., and Hornbæk, Kasper. User-defined gestures for elastic, deformable displays. In Proc. AVI 2014. ACM, New York, 2014: 1-8.
Watanabe, Yoshihiro, and Cassinelli, Alvaro, and Komuro, Takashi,
and Ishikawa, Masatoshi. The deformable workspace: A membrane
between real and virtual space. In Proc TABLETOP 2008. IEEE, New York,
2008: 145-152.
Yun, Kyungwon, and Song, Junbong, and Youn, Keehong, and Cho,
Sungmin, and Bang, Hyunwoo. ElaScreen: exploring multi-dimensional
data using elastic screen. In CHI’13 Extended Abstracts. ACM, New York,
2013: 1311-1316.
Modelling Media, Reality
and Thought: Ontological
and Epistemological
Consequences Brought
by Information Technology
Rodrigo Hernández-Ramírez
Faculty of Fine Arts, University of Lisbon, Portugal
[email protected]
Keywords: computational media, information technology,
ontology, philosophy, third culture.
Computers are our ultimate modelling machines. In the last decades, they became our first "metamedium": the foremost means through which we generate, store and exchange media, but also our primary instruments for thinking. As a consequence, these "quintessential" products of information technology forever altered the way we think about reality, the world and ourselves. This paper argues that our limited understanding of such transformations is one of the major impediments to developing adequate descriptive models for computational media. By showing how information technology is "re-ontologising" our world and stimulating a "permanent beta" attitude within contemporary technological culture, this paper shows that, without an adequate reformulation of our ontological commitments, our future analyses of media will be significantly hindered. By focusing on the metaphysical implications of current technological development, this paper shows the often neglected overlap between philosophy and media analysis, but also the theoretical benefits of promoting it.
1 Introduction
In less than four decades, computers, the “quintessential information technology product” (see Floridi 2009) went from being
highly specialised tools to become multi-purpose instruments
present across every conceivable area of human activity. Having
a seemingly endless range of applications, computers turned into
our foremost “intellectual tools” (see Dyson 1997) and “media
machines” (Kay and Goldberg 2003). As such – and to paraphrase
Lev Manovich (2013), they have “taken command” of virtually all
forms of communication and representation, thus becoming our
irst metamediums. Through this ‘digital revolution’, technology
at large has been recognised as a crucial factor for social and cultural change and even embraced as a form of culture in and for
itself (see Kelly 1998). Within this ‘post-digital’ setting, understanding contemporary media implies analysing the contents,
reception and social effects of audio-visual communications, and
understanding the history, functions and idiosyncrasies of the
instruments responsible for generating them. Given the constantly evolving nature of information technology and the profound ways in which it has transformed our view of the world and
ourselves, this is far from being a simple task.
Over the last decade, media analyses experienced important changes. While Bolter and Grusin's (2000) Remediation still remains an even-tempered response to the hype sparked by 'new technologies' – in particular the early prefiguration of Virtual Reality (VR) – their overall analysis ended up reducing 'new media' to little more than "refashioned" representations of traditional media. In contrast, pragmatist models began to shift their focus away from the contents and discourse of audio-visual representation and towards their technical aspects and history (see Kittler 1999; 2009). Unlike the dominating traditions within the humanities, these approaches no longer dismiss the possibility of technological autonomy and agency as 'deterministic' ideas. On the contrary, they belittle the humanities' traditional disregard for technical knowledge (see Fuller 2008) – in particular of programming – and advocate for increasing our overall computational literacy (see Mateas 2005; Hayles 2002), and for the recognition of software as the new dominant medium (see Manovich 2013). Finally, influenced by the philosophy of technology and video game studies, new transdisciplinary models are beginning to explore the relationship between computational technology and philosophical analysis (see Bogost 2012; Gualeni 2014), concentrating on the metaphysical problems brought about by computational tools and media.
1 The most obvious being that we
now conduct most of our thinking
through the very instruments we
are attempting to describe.
2 According to Kevin Kelly (2010),
the only classical treatise where
the construction technelogos
appears – albeit only a handful
of times and with a rather unclear
meaning – is Aristotle’s Rhetoric.
3 Even Vannevar Bush (1945), in his influential article, As we may think, refers to his imaginary artifacts as "machines" or "instruments" and not as 'technologies'.
4 A quick search in Google's Ngram Viewer shows that before the 1920s the term is virtually nonexistent. In the following decades its usage experiences steady growth until 1952, when the curve shows a dramatic surge.
5 The rather nebulous term 'new technologies' is a telling example of such a tendency.
The fact that computational technology became our primary information medium – that is, the foremost means through which we generate, store and communicate our thinking – has deep epistemological and ontological consequences1. This paper argues that our limited understanding of those consequences and their implications constitutes one of the major impediments to developing effective descriptive models for contemporary media. Following Luciano Floridi's (2010) description of the "information revolution", Kevin Kelly's (1998) portrayal of the "third culture", and various accounts belonging to the philosophy of technology, this paper will describe some of the most salient ontological and epistemological changes introduced by computational technology. The analysis begins by arguing why information technology is effectively "re-ontologising" our world, before making the case for why our traditional approaches to computational media have failed to recognise these metaphysical shifts. Finally, the analysis closes by showing why, given its role as our very first intellectual metamedium, the computer has become our ultimate modelling machine. The overall theoretical implication stemming from this analysis is that – without an adequate reformulation of our ontological commitments – it will be increasingly complicated to generate adequate critiques of computational technology and media.
2 Technology’s role
The word 'technology' is nominally Greek2, but the concept itself is a (relatively) recent invention. Johann Beckmann, a German professor of economics who realised that the tools and techniques used by all trades were not a haphazard collection of unrelated artefacts but elements of an interconnected system, coined the term technologie in 1802 (see Kelly 2010). Despite the importance of such a finding, both the newly minted concept and the systemic nature of technology would remain obscure notions3 for the following 150 years4. The resurgence of the term coincided more or less with the dawn of the 'computer age', a circumstance that may partially explain why, for many media scholars, 'technology' was (and still is) synonymous with computational devices. Being a "functional category" (Levitin 2014), technology refers to all sorts of artificial devices and techniques, but the 'technology ≈ computer' equation continues to resurface every now and then in contemporary media analysis5. This cannot be attributed solely to some media theorists' reluctance to clarify what they mean by technology, but also to the inherent haziness of the term and the multifarious nature of the phenomena it refers to.
Devising tools is an intrinsically (although not exclusively) human trait; we have been doing it for over three million years (Wong 2015), and it would be difficult to dismiss its importance for our evolutionary success. It is precisely this role, and how it relates to 'culture' at large, that makes the definition of technology such a complex problem6. Cultural and technological change are quite difficult to trace, and establishing which one exerts more influence over the other at any given time is equally troublesome. For the critical theory and cultural and literary studies traditions, there is little point in attempting to do so, since, in their views, technology is but a manifestation of culture. Consequently, they dismiss any suggestion that technology might be an autonomous agency capable of inciting social change without the direct intentional involvement of a human subject as a "deterministic" (Bolter and Grusin 2000; see also Dusek 2006) or "reductionist" idea. That technology has indeed agency and is well beyond human control are precisely the views of thinkers such as Friedrich Kittler (1999; 2009; Gane 2005) and Kelly (2010). For their part, middle-ground positions between "cultural determinism" (see Dusek 2006) and 'technological determinism' portray technologies as "hybrid" systems (Ihde 2009; Latour 1993) comprising "hardware" (tools and machinery), "software" (institutions, ideas, customs), and the agents that apply them (see Dusek 2006; Li-Hua 2009). Under this view, potentially "every creation system beyond the basic apparatus of the body" (Wilson 2009, 9) qualifies as technology.
With the arrival of the PC, the Internet, mobile communications and other information technologies, humanity thrust itself into a revolution (Floridi 2010) with profound cultural, social and philosophical consequences. In terms of aesthetic creation, the evolution of mainframes into "media machines" (Manovich 2013; Kay and Goldberg 2003) brought about a significant shift in the way we produce and understand audio-visual communications and art. Software's 'ability' to simulate most previously distinct physical media and their tools (Manovich 2013) calls into question the adequacy of the 'medium' as a descriptive category. With digitisation came the inevitable loss of materiality, and theoretical approaches that relied on the 'objectness' of aesthetic artefacts found themselves engaging a new form of presence. Overall, the introduction of computational devices implied that aesthetic analysis would have to engage technology from a theoretical standpoint and assume that this dimension of cultural production could not continue to be ignored and treated as an alien province reserved for science and engineering.
6 For a more complete overview of the various definitions of technology see Dusek (2006), Verbeek and Vermaas (2009) and Ihde (2009).
3 Theoretical approaches to ‘new media’
Heavily influenced by the critical theory and cultural studies traditions, early analyses of computer-generated media tended to focus solely on their hidden dynamics and possible social effects, while disregarding the technical conditions which bring them to life. Amongst the most well-known accounts stemming from this tradition is Bolter and Grusin's (2000) "remediation"7 model, which essentially claims that there is no meaningful difference between traditional (electronic) and so-called 'new media' because both constitute 'remediations' of previous forms of representation. In their view, new media is but a "refashioning" of old media and therefore shares the same goal as all forms of representation since the Renaissance: to "put the viewer in the same space as the objects viewed"8 (Bolter and Grusin 2000, 11) while simultaneously concealing the factuality of their intermediation.
Other models focus instead on the "material structures" (see Gane 2005) – i.e., on the tools – responsible for generating media. Unlike their content-centred counterparts, these approaches no longer regard the idea of technological agency as anathema. Their views can be traced to the pragmatist tradition, particularly the notion that theory and practice work together9 (see Haack 2003) and, consequently, that our knowledge of the world is mediated by our instruments as much as by our concepts. Friedrich Kittler (1999, 2006), a vocal critic of "anthropocentric"10 interpretations of media, was perhaps one of the most influential figures within this 'camp'. He believed that technology ought to be critically analysed (see Gane 2005) precisely because it is increasingly beyond human control. Kittler's overarching arguments portray
media technology not just as objects of representation, but also as mediators of information. His approach consisted in understanding media by analysing the historical and technical conditions that surround their production.
For its part, software studies – a relatively recent tradition which advocates for a richer understanding of the history and idiosyncrasies of computational technology – tacitly endorses technological agency while it chastises the humanities for their insistence on dismissing the importance of programming and computational culture. More or less in the same tone, scholars such as Michael Mateas (2005) and Matthew Fuller (2008) argue that 'procedural' knowledge should not be regarded by the humanities as the exclusive domain of science and engineering, but embraced at large as a new form of literacy. As a theoretical approach, software studies aim to understand contemporary media through the specific technology responsible for generating it. Subscribing to the views of early personal computing pioneers, they regard the computer as the first "metamedium" (see Kay and Goldberg 2003; Manovich 2013) and software itself as the indisputable 'place' of contemporary media creation. Contrary to the remediation approach, they do recognise a fundamental distance between traditional and computational ('new') media. They argue that the constantly evolving language of contemporary audio-visual artefacts is symptomatic of software's idiosyncrasies, in particular of its ability to simulate virtually all previously distinct forms of media, their tools and techniques (see Manovich 2013).
7 Which they admittedly built upon McLuhan's (1994, 8) claim that the "'content' of any medium is always another medium".
8 Which, to a certain extent, is a rehashing of Heidegger's claim that the invention of the radio answered "man's existential tendency to 'dedistanciate', to diminish distances" (Kittler 2009, 29).
9 Because, in their view, the meanings of concepts become clear precisely as a result of their practical implementation; otherwise, they remain ungraspable abstractions. For pragmatists, models that dismiss the active role of practice (and hence, of technical instruments) are inherently suspicious (see Haack 2003).
10 He was particularly critical of McLuhan's portrayal of media as "extensions" (see Gane 2005, 28).
Although philosophical speculation on technology has been more or less present for several centuries, it was not until the 1970s that it became fully recognised as a particular branch of philosophical inquiry (see Dusek 2006) – whether the rise of computational technology played a significant role in this process or not is a matter open to speculation. For philosophy, computers are deeply transformative devices, not only because they played a fundamental role in the development of contemporary theories of the mind (see Pinker 1998), but also because the prospects of attaining AI and VR have serious implications for most long-standing philosophical areas, in particular those concerned with existence, knowledge, life, mind, and ethics. The philosophical outlook on computational technology has attracted the attention of various media and video game scholars (see Bogost 2012), given that many aspects of programming and information systems deal with metaphysical problems and this, evidently, is a distinctively philosophical area of enquiry. Moreover, this interaction has given rise to various forms of cross-fertilisation, leading scholars to regard computational technologies as useful appliances for conducting philosophical research (see Gualeni, 2014).
4 A conceptual framework
4.1 The third culture
Over the last decades, technology has not only been recognised as a defining aspect of human culture but – as Kevin Kelly (1998) would argue – as a form of (pop) culture based on technology and for technology. Kelly describes this "third" or "nerd" culture as an "offspring of science" which, unlike its forefather, does not seek to discover ultimate truths about the Universe but to generate "experience and novelty" through technological development. Although by no means a fully-fledged descriptive model, Kelly's characterisation offers a thought-provoking basis for describing not only the social consequences of widespread technological adoption but also – more importantly – the epistemological shifts that accompany it. Kelly credits C. P. Snow (2000) with already having described a middle-ground culture capable of bridging the gap between the two supposedly irreconcilable cultures described in his (infamous) lecture. Kelly, however, loses the moralising tone and modernist idealisations that pervade Snow's work, and describes this cultural tendency as an overlapping of scientific and engineering outlooks fuelled by an unrestrained desire to generate experiences – an attitude which, we should note, closely resembles the vanguard's artistic experimentation. As described by Kelly, the third culture is rather indifferent to theoretical restrictions, boundaries and credentials. Consequently, it favours 'transdisciplinarity' and 'remixability', and is willing to embrace "the irrational" (Kelly 1998) if it holds the promise of a new experience. In order to solve a problem, members of the third culture would rather build a functional artificial model than come up with an abstract theoretical solution for it.
4.2 Information technology
By definition, History begins with writing, and writing constitutes the first means to register information as "non-biological memory" (Floridi 2009, 227). It follows that writing marks the first stage in the evolution of information technology (IT) and thus the dawn of the 'information age' (see Floridi 2010). IT has three main functions (Floridi 2009): to register, communicate and generate information; and each of these functions has dominated the various stages IT has gone through over the millennia. Contrary to what some analyses of media stipulate, with each novel iteration IT does not replace its previous incarnations, but rather incorporates their functions11 – neither analogue nor digital audio-visual technologies have made writing obsolete. In the last three decades, computational technology has been steadily incorporating all the functions that were previously scattered throughout various dedicated technologies and, in the process, generating new ways to carry out those same functions. The computer thus constitutes the quintessential IT appliance (see Floridi 2009). As far as functional categories go, 'information technology' is no less nebulous than 'technology' alone; after all, the former harbours everything from handwriting to a magazine or a social network. Nonetheless, IT does indicate the practical and historical common denominator shared by all the entities it refers to, and what their general functions are. It follows that both computers and 'media' (whether analogue or digital) can be described in general terms as IT.
4.3 Ontology
11 It is fair to remember that
McLuhan (1994) as well as Bolter and
Grusin (2000) have commented on
this idea.
As a specific branch of philosophy, ontology is concerned with 'what there is' (Floridi 2004). To paraphrase Barry Smith (2004), ontology is a fundamentally descriptive enterprise concerned with types, kinds, structures, properties, events, processes and relations amongst entities, and with the various interpretations of reality; it involves "exhaustive classification" and categorisation within "all spheres of being". Unlike science, ontology does not seek explanation or prediction, but description. Ontology is a core element of metaphysical analysis (often both terms are treated as interchangeable) and its preoccupations sometimes overlap with those of epistemology – which is essentially concerned with 'how' we know what we know, and how we can say that such knowledge is true. In the first half of the twentieth century, logical positivism (the tradition that would give rise to contemporary philosophy of science) began to promote science as the most effective means to attain true knowledge of the world and, consequently, began to dismiss non-scientific metaphysical speculations as a "meaningless quest for answers to unanswerable questions" (Dupuy 2009, 214). Although this view has become significantly less reductive, ontology evolved into a method for analysing not the ultimate constituency of reality, but the entities and relations that science discovered (Proudfoot and Lacey 2010). Ontology hence became a "metalevel discipline" (Smith 2004) concerned not with the objects of the world itself, but with the objects within the various systems of belief (theories) that frame our views of the world.
In recent decades, ontology has become an important aspect of computer and information science and a fanciful means to refer to a “conceptual model” (Smith 2004) tasked with describing objects (entities, modules, etc.) and their relationships within artificial information systems. Ontology in this sense is not concerned with the dynamics of alternative possible worlds. An ontology is thus a system containing descriptions, definitions, rules, taxonomies and axioms that establish a framework for representing certain kinds of structured information within a system that may or may not interact with other systems (see Smith 2004). Outside of this specialised usage – albeit still related to it – ontology could be seen as the method through which we categorise and make sense of the entities that surround us. In recent years, however, this has translated into making sense of increasingly overwhelming amounts of information and the various forms in which it is generated. Thus, ontology implies not merely describing, but finding ways to organise, filter and discern the very things that inform and mediate our views of the world.
5 Ontological and epistemological changes
5.1 Computational technology is “re-ontologising” our world
Information technology (IT) is driving a revolution comparable
to those initiated by Copernicus and Darwin (see Floridi 2010).
This process, however, does not entail that we will all turn into
cyborgs, or that virtual environments will supplant physical reality. In order to understand why and how this revolution is coming
about, we could, as Floridi (2010) suggests, begin by distinguishing between ‘enhancing’ and ‘augmenting’ devices. An enhancement technology works in the cybernetic sense of extension and
control (Dupuy 2009) (categories that include anything from spectacles to prosthetics). For its part, an augmenting device is one
that allows users to interact with “different possible worlds” (Floridi 2010) (a microscope or even the Mars rovers would certainly fit within this category). Now, computational technology does not enhance or augment in the senses just mentioned, because it allows users to enter an alternate environment – an “infosphere” (Floridi 2010) – in which they may interact with other human (and perhaps, eventually, non-human) users.
Most of us already spend the better part of our waking hours within this environment. While our bodies remain ‘tied’ to a physical surrounding, a significant amount of our work, leisure and social activities take place online. But since our gadgets are now permanently within our reach (either in our pockets or around our wrists) and the ‘internet of things’ is gradually expanding, the once meaningful distinction between being ‘online’ and being ‘offline’ is rendered moot. By allowing us to communicate and interact with ‘otherworldly’ (Gualeni 2014) objects and environments (which evidently need not be as sophisticated as VR), computational technology is radically altering core tenets of our (still) modern “Newtonian” understanding of reality (Floridi 2010) – one that, to paraphrase Floridi (2010), remains populated by “dead” entities such as cars, buildings and refrigerators, but will gradually become “a-live” (artificially live) as the world becomes inhabited by animated gadgets controlled by invisible forces – a paradoxical reminder of pre-modern worlds.
5.2 The inadequacies of our ontological frameworks
Although our world is now filled with artefacts that sometimes contradict our modern understanding of reality, our theoretical approximations to media remain stubbornly informed by Newtonian metaphysics. Whenever we download music or move it within or across devices, we know we are not handling actual physical objects. What we ‘download’ and exchange are instructions that ‘tell’ our storage units to assume a particular magnetic configuration. Having spent most of our modern existence surrounded by physical objects, we find dealing with these abstract entities cognitively taxing; hence, we have devised visual and conceptual metaphors that allow us to handle and think about them as if they were in fact physical objects. For daily transactions, thinking about various types of digital ‘files’ and ‘folders’ is useful because it spares us the cognitive strain caused by metaphysical ambiguities. However, if what we are trying to do is understand, describe and criticise these entities, the otherwise helpful metaphors become an obstacle, because they make it seem as if digital and ‘a-live’ entities could be approached with the same ontological framework as traditional media.
Nowadays, computational technology (and technology at large) is often described through biological metaphors. Terms such as ‘environment’, ‘hybrid’ and ‘evolution’ are increasingly common across information sciences and media analyses. That media scholars turn to the natural sciences for nomenclatures is symptomatic of the absence of an adequate ontological model for digital media artefacts, and of their tacit recognition of the growing ‘a-liveness’ of technology at large12. Computational technology clearly brings about phenomena that we do not know how to categorise; hence, we lack a precising definition (i.e., one that goes beyond a mere dictionary description or an awkward neologism). To a great extent, this means that we have not yet found a proper place for computational media within our conceptual framework. If these circumstances remain unchanged, our ability to describe ever more complex information (and thus media) systems will be significantly hindered.

12 The seemingly unavoidable arrival of AI is bringing back old romantic metaphysical questions and anxieties (as recent calls to action by Elon Musk and Stephen Hawking show).
5.3 Modelling machines
Computers are no ordinary instruments. Thanks to software’s “permanent extendibility” and “modularity” (see Manovich 2013) they have become our first multi-purpose appliances: instruments for science and engineering, but also “media machines” (Kay 2003; Manovich 2013) and entertainment centres. Computers are “intellectual tools” (Dyson 1997), which means they are not merely transforming how we do and create things, but how we think and understand the world, and ourselves. Like writing (our first information technology), they are not simply means to enhance our memory, but to externalise it, to process it and to communicate our thinking. Unlike writing, the results of this thinking can be objectified beyond interpretable code. Computers are modelling appliances that rely on information – i.e., well-formed, meaningful and truthful data (see Floridi 2004) – as their raw material. Provided that someone is capable of formulating an adequate algorithmic translation of a problem, a sufficiently powerful computer will be capable of generating a simulation through various forms of perceptible outputs. In other words, computers make abstractions tangible in a way that no other technology can. Because of them, our ideas are progressively less constrained by the limits of our ‘mind’s eye’ or by the limitations imposed by laborious analogue representations. As epistemological tools, computers both augment and permanently extend our minds.
By altering our epistemological boundaries (by turning the notion of ‘medium’ into a mere operational category and by encouraging transdisciplinary approaches), computational technology has forever changed the way we structure knowledge. Computers are radically transforming not only how we regard certain phenomena within a demarcated scientific field, or how we communicate and entertain ourselves and represent the world; they are changing our view of reality itself. They are transforming how we understand perception and experience, two fundamental aspects of all human activities, and in particular of aesthetic creation. Of all the ways computers are changing art and media, the most profound are not necessarily those associated with practical matters, but those resting at an intellectual level. Our theoretical difficulties do not originate solely in media theory’s long-standing neglect of technology as an object of analysis, but in a deeper handicap affecting all human disciplines. The fact is we simply don’t know what entities such as software, data and information are, and to what category of ‘objects of the world’ they belong.
6 Some implications
By allowing us to build all conceivable kinds of models, computational technology has given rise to a new epistemic stance based not on theoretical models, but on tinkering; a kind of permanent beta attitude, which regards experience and artefacts as always susceptible to upgrades. Fully embraced by the third culture, this attitude and the instruments enabling it are shifting our epistemological protocols and boundaries, forcing us to rethink the way we structure and categorise our knowledge. With the computer as a primary tool, “nerd culture” is blurring the lines between craft, art and engineering, and thus wreaking havoc amongst traditional disciplines by rendering their theoretical models anachronistic. The problems brought about by computational technology are not so much theoretical as they are practical. The current age is a promising one for artists and engineers, but a complex one for theoreticians planning to keep up with their creations.
In light of such transformations, it is clear that media analyses are bound to revise both their theoretical and methodological frameworks. Given the constantly evolving nature of information technology and the permanently extendible character of the media it produces, a potential descriptive model requires the same degree of flexibility and extendibility. Nonetheless, this requires a strong ontological commitment, a core architecture over which to proceed and build future analyses; a kind of flexible ‘source code’ able to withstand extreme ‘debugging’ without falling apart. A good starting point would be to situate contemporary media within a larger critique of the phenomenon responsible for producing it: information technology.
Conclusions
Our limited understanding of the metaphysical consequences brought about by information technology is one of the major impediments to developing effective descriptive models for contemporary media. This problem is further complicated by the fact that computational technology itself is extremely difficult to characterise, since we lack a functional category in which to place this unprecedented form of “engineering”. For media and art theorists, computational aesthetic artefacts thus present a rather difficult object of analysis. On the one hand, new approaches have to overcome the humanities’ traditional refusal to engage technology and computation beyond a superficial critique. On the other hand, art and media theory need to establish new epistemic compromises that would allow them to know, at least temporarily, what type of objects they are dealing with. Software-centric approaches go a long way towards explaining the workings and history of the tools responsible for generating media. The latter problem could be better engaged by turning to philosophical approaches, since they are already concerned with trying to generate appropriate models to fathom the actual nature of the artefacts transforming our world and ourselves in such profound and irreversible ways.
References
Bogost, Ian. 2012. Alien Phenomenology, or What It’s Like to Be a Thing.
E-book. Minneapolis: University of Minnesota Press.
Bolter, Jay David, and Richard Grusin. 2000. Remediation. Understanding
New Media. First paperback. Cambridge, Massachusetts: The MIT Press.
Bush, Vannevar. 1945. “As We May Think.” The Atlantic. Retrieved
from http://www.theatlantic.com/magazine/archive/1945/07/
as-we-may-think/303881/.
Dupuy, Jean-Pierre. 2009. “Technology and Metaphysics.” In A
Companion to the Philosophy of Technology, edited by Jan Kyrre Berg
Olsen, Stig Andur Pedersen, and Vincent F. Hendricks, 214–17.
Massachusetts; Oxford: Blackwell Publishing.
Dusek, Val. 2006. Philosophy of Technology: An Introduction.
Massachusetts; Oxford: Blackwell Publishing.
Dyson, Freeman. 1997. Imagined Worlds. Cambridge, Massachusetts:
Harvard University Press.
Floridi, Luciano. 2004. “Information.” In Philosophy of Computing and
Information, edited by Luciano Floridi, 14:40–61. Blackwell Philosophy
Guides. Oxford: Blackwell Publishing.
———. 2009. “Information Technology.” In A Companion to the Philosophy
of Technology, edited by Jan Kyrre Berg Olsen, Stig Andur Pedersen,
and Vincent F. Hendricks, 227–31. Massachusetts; Oxford: Blackwell
Publishing.
———. 2010. Information: A Very Short Introduction. E-book. Oxford; New York: Oxford University Press.
Fuller, Matthew. 2008. “Introduction, the Stuff of Software.” In
Software Studies: A Lexicon, edited by Matthew Fuller. Leonardo Series.
Cambridge, Massachusetts: The MIT Press.
Gane, Nicholas. 2005. “Radical Post-Humanism: Friedrich Kittler and
the Primacy of Technology.” Theory, Culture & Society 22 (3): 25–41.
doi:10.1177/0263276405053718.
Gualeni, Stefano. 2014. “Augmented Ontologies or How to Philosophize
with a Digital Hammer.” Philosophy & Technology 27 (2): 177–99.
doi:10.1007/s13347-013-0123-x.
Haack, Susan. 2003. “Pragmatism.” In The Blackwell Companion to
Philosophy, edited by Nicholas Bunnin and E.P. Tsui-James, 774–89.
Oxford, England: Blackwell Publishing.
Hayles, N. Katherine. 2002. Writing Machines. Mediawork Pamphlet.
Cambridge, Massachusetts: The MIT Press.
Ihde, Don. 2009. “Technology and Science.” In A Companion to the
Philosophy of Technology, edited by Jan Kyrre Berg Olsen, Stig Andur
Pedersen, and Vincent F. Hendricks, 51–60. Massachusetts; Oxford:
Blackwell Publishing.
Kay, Alan, and Adele Goldberg. 2003. “Personal Dynamic Media.” In The
New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort,
392–404. Cambridge, Massachusetts: The MIT Press.
Kelly, Kevin. 1998. “The Third Culture.” Science 279 (5353): 992–93.
doi:10.1126/science.279.5353.992.
———. 2010. What Technology Wants. E-book. New York: Viking.
Kittler, Friedrich A. 1999. Gramophone, Film, Typewriter. Edited by
Timothy Lenoir and Hans Ulrich Gumbrecht. Translated by Geoffrey
Winthrop-Young and Michael Wutz. Writing Science. California:
Stanford University Press.
———. 2009. “Towards an Ontology of Media.” Theory, Culture & Society 26
(2-3): 23–31. doi:10.1177/0263276409103106.
Latour, Bruno. 1993. We Have Never Been Modern. Cambridge,
Massachusetts: Harvard University Press.
Levitin, Daniel J. 2014. The Organized Mind: Thinking Straight in the Age
of Information Overload. E-book. New York: Dutton.
Li-Hua, Richard. 2009. “Definitions of Technology.” In A Companion to
the Philosophy of Technology, edited by Jan Kyrre Berg Olsen, Stig Andur
Pedersen, and Vincent F. Hendricks, 18–22. Massachusetts; Oxford:
Wiley-Blackwell.
Manovich, Lev. 2013. Software Takes Command. Edited by Francisco J.
Ricardo. First. International Texts in Critical Media Aesthetics. New
York: Bloomsbury.
Mateas, Michael. 2005. “Procedural Literacy: Educating the New Media
Practitioner.” On the Horizon 13 (Special Issue. Future of Games,
Simulations and Interactive Media in Learning Contexts).
McLuhan, Marshall. 1994. Understanding Media: The Extensions of Man.
Massachusetts: The MIT Press.
Pinker, Steven. 1998. How the Mind Works. London: Penguin Books.
Proudfoot, Michael, and A.R. Lacey. 2010. The Routledge Dictionary of Philosophy. New York: Routledge.
Smith, Barry. 2004. “Ontology.” In Philosophy of Computing and
Information, edited by Luciano Floridi, First, 14:155–66. Blackwell
Philosophy Guides. Oxford, UK: Blackwell Publishing.
Snow, C.P. 2000. The Two Cultures. E-book. Cambridge, England:
Cambridge University Press (Virtual Publishing).
Verbeek, Peter-Paul, and Pieter E. Vermaas. 2009. “Technological
Artifacts.” In A Companion to the Philosophy of Technology, edited by
Jan Kyrre Berg Olsen, Stig Andur Pedersen, and Vincent F. Hendricks,
165–71. Massachusetts; Oxford: Blackwell Publishing.
Wilson, Stephen. 2002. Information Arts: Intersections of Art, Science, and
Technology. Cambridge, Massachusetts: The MIT Press.
Wong, Kate. 2015. “Archeologists Take Wrong Turn, Find World’s Oldest Stone Tools.” Scientific American: Observations. April 15. http://blogs.scientificamerican.com/observations/2015/04/15/archaeologists-take-wrong-turn-find-worlds-oldest-stone-tools/.
Beyond Vicarious Interactions: From Theory of Mind to Theories of Systems in Ergodic Artefacts
Miguel Carvalhais
ID+ / Faculty of Fine Arts, University of Porto, Portugal
[email protected]
Pedro Cardoso
ID+ / Faculty of Fine Arts, University of Porto, Portugal
[email protected]
Keywords: Ergodic, Interaction, Simulation, Aesthetics,
Procedural Design, MDA, Vicarious Interaction, Interpretation.
Procedural media allow for unprecedented modes of authorship and for the development of new aesthetic experiences. As artists and communicators, but also as readers and users of these systems, we should be aware that their aesthetic potential is not simply defined by direct interaction. Although direct interaction is one of the most perceivable components in the relationship between ergodic media or artefacts and their readers, one should not forget that the reader’s interpretation and capacity to apprehend and simulate the processes developed within these artefacts is continuous, ever present and significant. In this context, this paper argues not only that ergodicity does not necessarily imply direct interaction, but also that non-interactive procedural artefacts are able to allow the development of ergodic experiences, not through direct interactions but rather through simulated interactions, by understanding procedural activities and developing mental analogues of those processes. We aim at raising this awareness, setting up the grounds for designing for what we call virtuosic interpretation, an activity that may be described as the ergodic experience developed by means of mental simulations.
1 Processor-based media
1 Murray often mentions “virtual worlds”, a term that, although still useful, may be dangerous because of the way it may ambiguously describe either the topology of the text, a procedurally simulated space or the diegetic spaces within it. More recently, Nick Montfort (2003) used the slightly less ambiguous term simulated world in his analysis of interactive fiction.
Digital technologies are becoming ubiquitous, replacing other media forms as very economic and reliable alternatives. They are excellent simulators of other media forms but, maybe because of this trait, they often fall short of being developed to their highest potential for the creation of new media forms. Therefore, a complete definition of digital media should not be based solely on their digital encoding, but also on the fact that, being processor-based, these media forms are also essentially procedural.
Digital media may be developed in either data-intensive or process-intensive approaches (Crawford 1987), the first of these devoting most of the available resources to “moving bytes around” (Crawford 1987) in artefacts that “are based primarily on pre-recorded sound and/or image sequences, or on static texts or images that are selected or arranged during the interaction” (Kwastek 2013, 114), and mainly using their procedural capacities to select, rearrange, compose or give access to these assets. A process-intensive approach tends to produce artefacts where “sound and image data (…) will be generated in real time according to algorithms” (Kwastek 2013, 114) and where, even when data-intensive approaches are also used, the focus on procedurality is clear.
So we may emphasize procedurality in designating these media as procedural rather than simply digital, following Janet Murray’s first essential property of “digital environments” (1997, 71) and her observation that a computer “is not fundamentally a wire or a pathway but an engine”, designed to “embody complex, contingent behaviors” (1997, 72). As such, and continuing to follow Murray, we should regard authorship in these media as also being procedural, a mode of authorship where one writes “the rules by which the texts appear as well as writing the texts themselves” (1997, 152), where one creates “rules for the interactor’s involvement” and “conditions under which things will happen in response to the participant’s actions” (1997, 152). This turns the author into something of “a choreographer who supplies the rhythms, the context, and the set of steps that will be performed” (1997, 153), who creates not sets, scenes, or objects, but potential narratives to be discovered and enacted.1 Procedural authorship therefore also underlines, and takes advantage of, the “principal value of the computer, which creates meaning through the interaction of algorithms” (Bogost 2008, 122), an ability that “fundamentally separates computers from other media” (Bogost 2008, 122) and that turns procedural media into a significantly different class of artefacts.
2 Interacting
The role of the reader2 of these media is also necessarily affected.
Murray describes how the “interactor, whether as navigator, protagonist, explorer or builder, makes use of this repertoire of possible steps and rhythms to improvise a particular dance among
the many, many possible dances the author has enabled” (1997,
153) and how this leads readers to necessarily adopt something
of a creative role within the system, although this is typically not a role equivalent to that of the author, nor even enough to qualify as co-authorship. Rather, Murray prefers to speak about
agency, the power “over enticing and plastic materials” (1997, 153)
“to take meaningful action and see the results of our decisions
and choices” (1997, 112), and distinguishes it from mere activity,
seeing how it “goes beyond both participation” (1997, 128), and
becomes an aesthetic pleasure in itself.
2 Among all the possible and often confusing designations – user, reader, spectator, player, interactor, etc. – we will use “reader” in this text, albeit recognizing that this also describes a particular mode of engagement with a medium or artefact.

Following Murray, Espen Aarseth (1997) speaks of the ergodic experience developed in artefacts where multiple user functions are possible to undertake. These are the omnipresent interpretative function; the explorative function, in which readers may make decisions regarding which spaces of the text’s topology to access; the configurative function, in which textual contents may be created, selected or rearranged; and the textonic function, when contents may be permanently added to the text. Aarseth posits that artefacts where “a cybernetic feedback loop, with information flowing from text to user (through the interpretative function) and back again (through one or more of the other functions)” may be described as ergodic.3 Therefore, having thus defined ergodic texts, we may conceive of other forms of ergodic media, where some of the user functions identified by Aarseth may be developed.

3 Ergodic is a term “appropriated from physics that derives from the Greek words ergon and hodos, meaning ‘work’ and ‘path’. In ergodic literature, nontrivial effort is required to allow the reader to traverse the text.” (Aarseth 1997, 1)

Allowing for interaction and agency, these media forms will be characterized by a relatively unpredictable usage, with the “string of events that occur during gameplay and the outcome of those events (…) unknown at the time the product is finished” (Hunicke et al. 2004). The number of user functions involved, and their relative weight in the experience of these media forms, may vary.4 Hunicke, LeBlanc and Zubek propose that artefacts such as these5 may be described in terms of three design stages they call Mechanics, Dynamics and Aesthetics, developed in consecutive levels during the artefact’s design and discovered in reverse order by their readers. The perspective of the reader is therefore opposite to that of the author in any ergodic artefact. The author deals primarily with mechanics, “at the level of data representation and algorithms” (Hunicke et al. 2004), and consequently with dynamics, the runtime behaviour of the mechanics previously developed, which will ultimately result, at the aesthetics level, and twice removed from the author, in “the desirable emotional responses evoked in the player, when she interacts with the game system.” (2004) Through the user functions, a reader interacts with the artefact at the aesthetics level, discovering the dynamics but normally not being able to burrow into the black box of the mechanics level.

4 As Markku Eskelinen notes, in literature, theatre or film the dominant user function is the interpretative, but in forms such as games it is usually the configurative (Bogost 2006, 108).

5 Their MDA framework was originally developed as “a formal approach to understanding games” (Hunicke et al. 2004). Games are undoubtedly ergodic forms and the MDA framework has been previously used by ourselves (Carvalhais 2012b) and other authors (Ribas 2012; 2014b) to study interactive and ergodic media forms.
With dynamic and continuously varying outputs that are largely
unknown both to the author and the reader, we may consider the
aesthetic value of interaction. Katja Kwastek notes how in data-intensive artefacts, readers may “seek to activate all the available
assets” (2013, 114) in order to achieve a sense of completeness,
because being used to linearity and completion in most media,
we may also be “inclined to want to experience the ‘whole’ of a
work” (2013, 114). In process-intensive artefacts, completeness
may be found in exhausting “the underlying algorithms and the
possibilities for interaction offered” (2013, 114), with the focus of
the readers shifting from traditional aesthetics to an aesthetics of
interaction and of performance (Ribas 2014a). This is particularly
noted when readers are not engaged directly with the artefact
but rather observe other readers during their interactions, a situation defined as “vicarious interaction” (Levin 2010). Of course, “sensual or cognitive comprehension can still take place in these cases” and the observer may discover “relations between action and effect, even if he is not actively involved”, not developing the same experience as an active interactor, but being “able to observe and understand interaction processes that he would not have carried out” (Kwastek 2013, 94). Furthermore, the actual
performance of the interactor may also be a factor to consider aesthetically, as Siegfried Zielinski discussed (2006, 138).
3 Not interacting
Given a machine for producing text, there can be three main positions of
human-machine collaboration: (1) preprocessing, in which the machine
is programmed, configured, and loaded by the human; (2) coprocessing,
in which the machine and the human produce text in tandem; and (3)
postprocessing, in which the human selects some of the machine’s effusions and excludes others. These positions often operate together: either
1 and 2; 1 and 3; or 1, 2, and 3; or 1 by itself, although the human operator
need not be the same in different positions. (Aarseth 1997, 135)
All three of Aarseth’s positions for collaboration require some direct human-computer interaction. His definition of ergodic text (or, by extension, an Aarseth-based definition of ergodic artefact) requires interaction with the human reader. Therefore, non-interactive media, even if processor-based, may be difficult to classify as ergodic. In non-interactive artefacts – and, to an extent, in non-interactive states of otherwise interactive systems – the reader is apparently limited to the interpretative function and barred from developing any of the functions necessary to the ergodic definition. We posit, however, that a broader – and procedural – understanding of the nature of the interpretative function may allow us to consider the experience of these systems as being ergodic.
4 Beyond vicarious interactions
While interacting vicariously, a reader may be able to intuit or
understand the mechanical principles of a system, and to infer
causal relations. This happens because, by observing the system’s and the interactant’s behaviours, the reader may identify regularities and patterns that lead her to expect specific reactions from both parties – from the artefact’s outputs to specific actions
of the interactor, and from these to particular outputs from the
artefact. Although it may be questionable whether a true understanding of the artefact’s mechanics is ever attained through
vicarious interaction, or even through direct interaction when
direct access to the code is not allowed, we may expect that if the
outputs of the artefact exhibit regularities and its behaviours are
somewhat determinable (Carvalhais 2010, 363), the reader may be
able to develop a working model of the system that is capable of
producing useful predictions regarding its behaviours or those of
the pair interactant-system. This model may of course be based
on false assumptions, or on the adaptation of familiar behaviours
from other systems, but if it is demonstrably effective, it will also
prove useful to the reader, allowing her to approach completeness
in the experience of the system. As a result of vicarious interaction the reader may be able to peer through a system’s aesthetics
level and to develop hypotheses about dynamics and ultimately
about mechanics. What then happens if interaction is removed
from the experience?
When reading a dynamic and transient system with which one
is not able to interact, in order to achieve a comprehension of its
procedural level, and therefore of its behaviour, a reader needs
to interpret beyond semantics, surpassing the traditional scope
of the interpretative function. Besides the interpretation of text,
images, sound and other sensorial modalities, procedural systems
also allow for procedural interpretation. When interpreting texts,
readers are “integrating details, forming and developing hypothesis, modifying, confirming, and abandoning predictions” (Douglas 1994, 175), and much of this is likewise possible to do at the
procedural level.
When perceiving a system and following its outputs, a reader is not capable of directly accessing the prescriptive rules at the level of mechanics, but she is nevertheless able to make use of descriptive rules to create models that intend to explain or understand the phenomenological levels of the experience. While registering affordances on the artefact’s outputs, the reader gradually identifies patterns of behaviour – starting with possible behaviours and moving towards more likely or probable behaviours – and identifies relations between the perceived system and other systems or artefacts in the world.6 Using the data thus gathered, the reader is then able to start developing mental simulations of the processes behind the surface units found in the artefact’s outputs. The reader probes the level of mechanics, constructing hypotheses that are verifiable at the level of dynamics and that allow her to fine-tune the mental models.
6 Cf. Metzinger: “Everything we perceive is automatically portrayed as a factor in a possible interaction between ourselves and the world.” (2009, 167).

These models do not need to be based on complete sets of data, and they do not need to be rigorous to the point of generating accurate predictions of the system’s behaviours.7 First and foremost, they need to pose testable hypotheses that can be verified with the system under observation or falsified by new findings, being then replaced by better hypotheses that ultimately contribute to a good working model of the system. This will then be gradually and continually developed by trial and error, by validation and falsification.

7 Being very used to interacting with macroscopic and gnarly systems in everyday experience, readers are accustomed to a certain level of analogue variation and noise in the expected outcomes of any system. Therefore, a prediction does not need to be exact, or totally accurate, it simply needs to be roughly approximated to be evaluated as valid.

In the gradual understanding of a complex process of which the reader has nothing but inferred clues, we may find an analogue to the process of developing theories of mind of other humans or human-like entities.8 A theory of mind allows one to picture “the world from another person’s vantage point” and to construct “a mental model of another person’s complex thoughts and intentions in order to predict and manipulate [their] behavior.” (Ramachandran 2011, loc. 2281) Based on known humans, familiar systems or mechanics, but also on other artefacts, phenomena from the physical world, etc., humans speculate regarding mental processes, developing hypotheses that are confirmed or falsified based on the witnessed actions.

8 As V. S. Ramachandran (2011) suggests, the capacity to develop theories of mind is not exclusive to humans and not only developed towards humans but also towards entities or systems that may exhibit behaviours, emotions or “mental states” comparable to those witnessed in humans (Zunshine 2006), with “many of us even project[ing] this onto objects.” (Gazzaniga 2011, 158).

Through the developed simulations, and still from the stance of the reader, one tries to see the system from the designer’s point of view, thus embracing its wholeness and fully understanding it. Interactive systems are “plastic objects” that need to be interacted with in order to be experienced and that pose the challenge of “extruding play and form, which are no longer located internal to the subject, but have to be performed” (Kirkpatrick 2011, 6) in order to be activated and to allow for an understanding of their “true structure” (Kirkpatrick 2011, 8). On the other hand, non-interactive systems, or systems in non-interactive states, do not allow the user to investigate them directly through interaction, but the mental simulations developed by the user are far more plastic, versatile, and accessible. They allow for transformations, variations, and for a larger space of possibilities to be explored as a theory of the system is developed, a process during which one is not engaged with the artefact’s diegesis or with a fiction but rather tries “to master its routines” (Kirkpatrick 2011, 8).
The process of validating the model can then be seen as leading the reader through an experience of traversal punctuated by
epiphanies – when hypotheses are confirmed – and aporias – when hypotheses are disconfirmed – which may lead to the development
of narrative (Aarseth 1997, 92) and even of drama9 in artefacts
that wouldn’t otherwise be experienced as narrative (Carvalhais
2012a; 2013). Furthermore, every epiphany will activate the
reward centres of the reader’s brain, resulting in a pleasurable
experience that will drive the enjoyment of the artefact and of
the experience of its simulation.
9 The building up of expectations regarding a system, and the violation of those expectations by the system, not only contributes to the validation of the hypotheses or models, but also builds meaning from disruption, as Krome Barratt notes (1980, 301).

5 Ergodic contemplation
We may thus propose that non-interactive systems, or systems in non-interactive states, regardless of the impossibility of the user developing explorative or configurative functions, may also be seen as ergodic. The mental exploration and reconfiguration of analogues – or simulations – of the systems can be seen as a de facto ergodic experience; therefore, procedural works are not limited to a classic interpretation, because their variability, dynamism, and procedural nature allow for a new level of virtuosic interpretation of the artefact that, while seemingly contemplative, is actually very active. As with other ergodic forms, procedural artefacts require the development of a nontrivial effort from the reader in order to find not one but many paths along the traversal of the procedural space of possibilities.
In the ergodic forms studied by Aarseth the reader is “constantly reminded of inaccessible strategies and paths not taken” (1997, 3), with each decision making parts of the content more or less accessible and building up uncertainty regarding the result of one’s choices and what may or may not be missed along the traversal. In procedural artefacts the questions posed by the reader point towards how many and how diverse those paths may be, and to a discovery of how the system – unaided by a user – tends to follow them. As a result of ergodic contemplation one is then led not to build up uncertainty but rather to increase information and knowledge regarding the artefact’s mechanics, and to regard the possibilities to be discovered at the dynamics and aesthetics levels.
If in other ergodic forms the reader faces the risk of rejection (Aarseth 1997, 4), the reader of a procedural artefact has to deal with the added risk of incomprehension, that is, of being unable to build a working theory of the system that may lead to useful predictions. Naturally, with the exception of the very simplest of systems, a total understanding of the processes is not only unattainable but also utopian, and the reader should be reconciled with that.
6 Designing for virtuosic interpretation
While developing procedural systems that intend to foster ergodic
interpretation, artists and designers should be aware that much of
this process of building models and testing hypotheses is developed unconsciously. A conscious procedural close reading is certainly possible but in most cases – with perhaps the exception of
game forms – should not be expected. One is then faced with the
question of how to communicate processes, of how to design processes that are communicable to and discoverable by the reader.
Code descriptions, procedural descriptions or even explicit code
may be presented either at or with the system. These may duly
inform the reader and allow for the easier elaboration of models
and predictions. An example of this approach may be found in John
F. Simon Jr.’s Every Icon, a work presented with the following text:
Given: A 32 × 32 Grid
Allowed: Any element of the grid to be black or white
Shown: Every Icon
(Simon 1997)
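Simon’s text is, in effect, already an algorithm. As an illustration of how directly such a description translates into executable form, the following is a minimal Python sketch that enumerates every possible 32 × 32 black-and-white icon in a simple binary, row-major order; the traversal order, the encoding and the function names are our own assumptions for the purpose of illustration, not Simon’s implementation (which was a gallery-installed applet).

# Illustrative sketch only: enumerate every possible 32 x 32 black-and-white icon,
# as described by the text shown with Every Icon. Not Simon's actual code.

SIZE = 32                      # the grid is 32 x 32 cells
CELLS = SIZE * SIZE            # 1024 cells, hence 2**1024 possible icons

def icon(n):
    """Return the n-th icon as rows of 0 (white) / 1 (black) cells."""
    bits = [(n >> i) & 1 for i in range(CELLS)]
    return [bits[row * SIZE:(row + 1) * SIZE] for row in range(SIZE)]

def every_icon():
    """Yield all 2**1024 icons, one after another."""
    for n in range(2 ** CELLS):
        yield icon(n)

if __name__ == "__main__":
    gen = every_icon()
    first = next(gen)          # the all-white grid
    second = next(gen)         # a single black cell in one corner
    print(sum(map(sum, first)), sum(map(sum, second)))  # prints: 0 1

The sketch also makes the scale of the score concrete: even at a billion icons per second, traversing the full space of 2**1024 icons would take incomparably longer than the age of the universe – a realisation that the reader reaches precisely by mentally simulating the process rather than by watching it complete.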
More recently, C.E.B. Reas has developed several works in his Process series that are presented with textual descriptions of the elements in the pieces from which dynamic compositions emerge. Elements are “machines” composed of forms (e.g. “Circle”, “Line”) and one or more behaviours (such as “Move in a straight line”, “Constrain to surface”, “Change direction while touching another Element”, etc.). Each piece in the series is a process that “defines an environment for Elements and determines how the relationships between the Elements are visualized” and that is presented as “a short text that defines a space to explore through multiple interpretations.” (Reas 2008) As examples, we may present:
Process 18
A rectangular surface filled with instances of Element 5, each with a different size and gray value. Draw a quadrilateral connecting the endpoints of each pair of Elements that are touching. Increase the opacity of the quadrilateral while the Elements are touching and decrease while they are not.

Process 17
A rectangular surface filled with instances of Element 5, each with a different size and gray value. Draw a transparent circle at the midpoint of each Element. Increase a circle’s size and opacity while its Element is touching another Element and decrease while it is not.
(Reas 2008)
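Reading such a score, one can begin exactly the kind of mental simulation discussed above. The following minimal Python sketch is one possible – and deliberately simplified – reading of Process 18, in which Element 5 is reduced to a circle that moves in a straight line and is constrained to the surface, and in which the quadrilaterals are not drawn but only tracked as opacity values. The canvas size, element count, speeds and increments are arbitrary assumptions on our part, not Reas’s code.

# Simplified sketch of a Process-18-like step loop (not Reas's implementation).
# Elements are reduced to moving circles; "touching" means overlapping circles,
# and instead of drawing we only track the opacity of each pair's quadrilateral.
import math
import random

SIZE = 400            # assumed square surface, in abstract units
N = 20                # assumed number of Element instances

elements = [{
    "x": random.uniform(0, SIZE), "y": random.uniform(0, SIZE),
    "r": random.uniform(5, 30),                   # "each with a different size"
    "angle": random.uniform(0, 2 * math.pi),      # direction of straight-line travel
} for _ in range(N)]

opacity = {}          # (i, j) -> current opacity of the connecting quadrilateral

def touching(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"]) < a["r"] + b["r"]

def step():
    # "Move in a straight line" + "Constrain to surface" (clamped, no bounce)
    for e in elements:
        e["x"] = min(max(e["x"] + math.cos(e["angle"]), 0), SIZE)
        e["y"] = min(max(e["y"] + math.sin(e["angle"]), 0), SIZE)
    # Increase opacity while Elements are touching, decrease while they are not
    for i in range(N):
        for j in range(i + 1, N):
            delta = 0.01 if touching(elements[i], elements[j]) else -0.01
            opacity[(i, j)] = min(max(opacity.get((i, j), 0.0) + delta, 0.0), 1.0)

for _ in range(1000):
    step()

Even in this reduced form, the sketch shows how much of the reader’s model-building consists of filling in what the score leaves open: the sizes, speeds and densities are free parameters whose effects can only be discovered by running – or mentally running – the process.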
Finally, explicit code may be found in “program code poetry”
(Cramer 2001), of which the works in Pall Thayer’s Microcodes
(2009-2014) series are good examples:
Sleep
31. March 2009
#!/usr/bin/perl
sleep((8*60)*60);
(Thayer 2009)
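(For readers who do not read Perl, the single statement above simply pauses execution for (8 × 60) × 60 = 28,800 seconds, i.e. eight hours.)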
If code or procedural descriptions are not presented to the reader, processes may be designed with repetition and (some amount of) regularity in mind. As an example, algorithmic processes that largely depend on pseudo-randomness may dissimulate their structure and processes under extremes of disorder that are far from a readable and understandable level of effective complexity (Galanter 2003, 8; 2008; Lloyd 2006). A balance of repetition and novelty – to which randomness can certainly contribute (Leong et al. 2008) – can ease deduction, comprehension, and the following of processes, as well as (to a certain extent) the participation of the reader in the processes.
Finally, and as Steve Reich notes in Music as a Gradual Process
(1968), perceptible and gradual processes facilitate the closely
detailed reading of a piece.10 Therefore, the pacing of the processes – and we must bear in mind that the timescales of modern
computational devices and of human psychology and perception
are very different – may also be instrumental in facilitating (or
altogether allowing) ergodic interpretation.
But the processes should also be developed taking into account a series of perils or difficulties related to human interpretation of procedural systems – both natural and artificial – as e.g. being aware of psychological and perceptual illusions such as the Eliza effect11 (Hofstadter 1995, 158) and the Tale-Spin effect.12 The mental processes supporting some of these illusions should also be taken into account during development: patternicity,13 “the tendency to find meaningful patterns in both meaningful and meaningless data” (Shermer 2011, 5), and agenticity, “the tendency to infuse patterns with meaning, intention, and agency” (Shermer 2011, 5).

10 “John Cage has used processes and has certainly accepted their results, but the processes he used were compositional ones that could not be heard when the piece was performed. The process of using the I Ching or imperfections in a sheet of paper to determine musical parameters can’t be heard when listening to music composed that way. The compositional processes and the sounding music have no audible connection. (…) What I’m interested in is a compositional process and a sounding music that are one and the same thing.” (Reich, 1968).

11 “…defined as the susceptibility of people to read far more understanding than is warranted into strings of symbols – especially words – strung together by computers. (…) We don’t confuse what electric eyes do with genuine vision. But when things get only slightly more complicated, people get far more confused – and very rapidly, too.” (Hofstadter 1995, 158).

12 “…denotes the converse situation [of the Eliza effect]. A very complex programming process is reproduced in such a simplified form that the complexity remains concealed from the recipient. Wardrip-Fruin’s name for this effect refers to a 1970s story-generating computer program whose highly complex algorithms could not be discerned by the users.” (Kwastek 2013, 135).

13 A phenomenon also known as apophenia.
7 Summary & Future Work
The interpretative user function should be regarded as broader
and more relevant to the aesthetic experience than what one may
be led to believe from its usual association with non-ergodic texts.
Procedural interpretation may allow the development of rough
analogues of the explorative and configurative functions, when
these are not present or possible in a given context, and lead to
the transfer of algorithmic processes between the artefact and
the reader and to the development of a virtuosic interpretation.
An awareness of these processes may thus lead creators to
develop artefacts that may rely on them or at least aesthetically
negotiate with them. Thus, if from traditional aesthetics we move to an aesthetics of interaction, agency and performance, we now find these also coupled with a very relevant aesthetics of process
ind these also coupled with a very relevant aesthetics of process
and procedurality. This paper establishes the need for this awareness, enumerating some considerations for the design of the ergodic experience of virtuosic interpretation, while future research
aims at expanding and uncovering new considerations, developing them into a formal set of principles and guidelines.
Acknowledgements
This project was partially funded by FEDER through the
Operational Competitiveness Program – COMPETE – and by
national funds through the Foundation for Science and Technology – FCT – in the scope of project PEst-C/EAT/UI4057/2011
(FCOMP-Ol-0124-FEDER-D22700).
References
Aarseth, Espen J. Cybertext: Perspectives on Ergodic Literature. Baltimore,
MD: The Johns Hopkins University Press, 1997.
Barratt, Krome. Logic and Design: In Art, Science & Mathematics. Guilford,
CT: Design Books, 1980. 1989.
Bogost, Ian. Unit Operations: An Approach to Videogame Criticism.
Cambridge, MA: The MIT Press, 2006.
———. “The Rhetoric of Video Games.” In The Ecology of Games:
Connecting Youth, Games and Learning, edited by Katie Salen. 117-40.
Cambridge, MA: The MIT Press, 2008.
Carvalhais, Miguel. “Towards a Model for Artificial Aesthetics:
Contributions to the Study of Creative Practices in Procedural and
Computational Systems.” PhD, Universidade do Porto, 2010.
———. “Artificial Aesthetics as Tulpas: Regarding Narratives as
Thoughtforms.” In Avanca | Cinema, 903-07. Avanca, 2012a.
———. “Unfolding and Unwinding, a Perspective on Generative Narrative.”
In ISEA 2012 Albuquerque: Machine Wilderness, edited by Andrea Polli,
46-51. Albuquerque, NM, 2012b.
———. “Traversal Hermeneutics: The Emergence of Narrative in Ergodic
Media.” In xCoAx 2013, 51-60. Bergamo, 2013.
Cramer, Florian. “Program Code Poetry.” http://netzliteratur.net/cramer/
programm.htm.
Crawford, Chris. “Process Intensity.” Journal of Computer Game Design 1, no. 5 (1987). http://www.erasmatazz.com/library/the-journal-of-computer/jcgd-volume-1/process-intensity.html.
Douglas, J. Yellowlees. “How Do I Stop This Thing?: Closure and
Indeterminacy in Interactive Narratives.” In Hyper / Text / Theory,
edited by George P. Landow. 159-88. Baltimore, MD: The Johns Hopkins
University Press, 1994.
Galanter, Philip. “What Is Generative Art? Complexity Theory as a
Context for Art Theory.” In Generative Art. Milan, 2003.
———. “Complexism and the Role of Evolutionary Art.” In The Art of
Artificial Evolution: A Handbook on Evolutionary Art and Music, edited by
Juan Romero and Penousal Machado. 311-32. Berlin: Springer, 2008.
Gazzaniga, Michael S. Who’s in Charge?: Free Will and the Science of the
Brain. New York, NY: Ecco, 2011.
Hofstadter, Douglas R. Fluid Concepts and Creative Analogies: Computer
Models of the Fundamental Mechanisms of Thought. [in English]. London:
Allen Lane, 1995.
Hunicke, Robin, Marc LeBlanc, and Robert Zubek. “MDA: A Formal Approach to Game Design and Game Research.” In Challenges in Game AI Workshop, Nineteenth National Conference on Artificial Intelligence. San
Jose, CA, 2004.
Kirkpatrick, Graeme. Aesthetic Theory and the Video Game. Manchester:
Manchester University Press, 2011.
Kwastek, Katja. Aesthetics of Interaction in Digital Art. Translated by
Niamh Warde. Cambridge, MA: The MIT Press, 2013. Kindle ebook.
Leong, Tuck, Steve Howard, and Frank Vetere. “Take a Chance on Me:
Using Randomness for the Design of Digital Devices.” Interactions 15,
no. 3 (2008): 16-19.
Levin, Golan. “The Manual Input Workstation: Documentary Collection.”
edited by Katja Kwastek. La Fondation Daniel Langlois, 2010.
Lloyd, Seth. Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. London: Jonathan Cape, 2006.
Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth
of the Self. New York: Basic Books, 2009.
Montfort, Nick. Twisty Little Passages: An Approach to Interactive Fiction.
Cambridge, MA: The MIT Press, 2003.
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in
Cyberspace. Cambridge, MA: The MIT Press, 1997.
Ramachandran, V.S. The Tell-Tale Brain: A Neuroscientist’s Quest for What
Makes Us Human. New York, NY: W. W. Norton & Company, 2011.
Reas, C. E. B. “Process Compendium.” http://reas.com/compendium_text/.
Reich, Steve. “Music as a Gradual Process.” In Writings About Music,
1965–2000, edited by Paul Hillier. 9-11. Oxford: Oxford University Press,
1968.
Ribas, Luísa. “Sound-Image Relations and Dynamics in Digital
Interactive Systems.” In Artech 2012, 6th International Conference on
Digital Arts. Algarve, 2012.
———. “On Performativity as a Perspective on Audiovisual Systems as
Aesthetic Artifacts.” In INTER-FACE: International Conference on Live
Interfaces 2014. Lisbon, 2014a.
———. “Performativity as a Perspective on Sound-Image Relations and
Audiovisuality.” In Mono #2: Cochlear Poetics: Writings on Music and
Sound Arts, edited by Miguel Carvalhais and Pedro Tudela. 29-50. Porto:
i2ADS, 2014b.
Shermer, Michael. The Believing Brain: From Ghosts and Gods to Politics
and Conspiracies – How We Construct Beliefs and Reinforce Them as
Truths. New York, NY: Times Books, 2011.
Simon, John F., Jr. “Every Icon.” Parachute, 1997.
Thayer, Pall. Sleep. 2009.
———. “Microcodes.” http://pallthayer.dyndns.org/microcodes/.
Zielinski, Siegfried. Deep Time of the Media: Toward an Archaeology of
Hearing and Seeing by Technical Means. Translated by Gloria Custance. Cambridge, MA: The MIT Press, 2006. 2002.
Zunshine, Lisa. Why We Read Fiction: Theory of Mind and the Novel.
Columbus, OH: Ohio State University Press, 2006. Kindle. 2012.
Videogame Art and the Legitimation of Videogames by the Art World
Sofia Romualdo
Independent researcher, Porto, Portugal
[email protected]
Keywords: videogames, art, art world, legitimation
The legitimation process of a new medium as an accepted form
of art is often accelerated by its adaptation by acclaimed artists.
Examining the process of acceptance of popular culture, such as
cinema and comic books, into the art world, we can trace historical parallels between these media and videogames. In recent years,
videogames have been included in exhibitions at specialty museums or as design objects, but are conspicuously absent from traditional art museums. Artists such as Cory Arcangel, Anne-Marie
Schleiner and Feng Mengbo explore the characteristics of videogames in their practices, modding and adapting the medium and
its culture to their needs, creating what is often called Videogame
art, which is widely exhibited in art museums but often criticised
within the videogames community. This paper aims to give a perspective on Videogame art, and to explore its role in the process of legitimation of the videogame medium by the art world.
1 Introduction
The assimilation of a new medium into the art world has, traditionally, been a matter of contention throughout the history of art. Media such as photography, film, television, street art and comic books struggled to be recognized and respected for several years after their creation, but were eventually accepted into the network comprised of galleries, museums, biennials, festivals, auctions, critics, curators, conservators, and dealers, defined thus by art historian Robert Atkins:

The art world is a professional realm – or subculture in anthropological language – akin to those signified by the terms Hollywood or Wall Street. (…) quantitatively, it is the sum of the individuals and institutions who belong to the global network dedicated to the production, distribution, and display of art and information about it; qualitatively, it is the set of customs and habits shared by those individuals and institutions. (Atkins 2013)
1 For opposing views, presented at the same event, compare curator Christiane Paul’s presentation, “Image Games”, with Tale of Tales’ “Over Games”, at the Art History of Games Symposium, Atlanta, Georgia, USA, February 2010.

What media such as film, photography, and comic books have in common is that they are forms of popular culture that have appeared fairly recently in the history of humankind, and which used to be described as having no intrinsic value beyond technological, commercial or anthropological interest. Film, television, and comic books, in particular, were accused not only of having little cultural value, but also of being vehicles for the promotion of violence and deviant behaviour, and therefore, critics affirmed, they could not be art (Jansson 2012). Similar arguments have been used towards videogames for the past four decades. The debate as to whether or not videogames can be art has become commonplace in industry events and publications, with compelling arguments from both sides.1 But what about the medium’s acceptance by the art world? Is it helped, or hindered, by the use of the videogame medium and surrounding culture by traditional artists, to create Videogame art?2

2 For the sake of consistency, this paper will use capitalization to indicate Videogame art as an artistic movement.

Videogame art is commonly derided by those in the videogames industry as gimmicky and implicative of a notion of inferiority to actual videogames. Matteo Bittanti quotes scholar Henry Jenkins in support of this idea:

‘A few of those critics have been prepared to defend videogames as art when they are created by artists already recognized for their accomplishments in other media (…). As these works take their place in the Whitney Biennial, the curators are not so much conceding that videogames are art as they are proclaiming that “even videogames can be used to make art in the hands of real artists”. Of course, the fact that highbrow artists are starting to tap game-like interfaces speaks to the impact this medium has on our visual culture. But if games are going to be thought of as art, let it be because of what Shigeru Miyamoto (Super Mario Brothers) does again and again and not because of what some pedigreed artist does once in a lark. Calling videogames art matters because it helps expand our notion of art and not because it allows curators to colonize some new space’. (Bittanti 2009)
Nevertheless, the adaptation of a relatively new medium by artists can help to accelerate that medium’s acceptance by the art
world. Evidence of this idea can be found by looking at the history
of other popular media.
2 Other Media
As much as the contemporary art world is fascinated by the new,
there is still a certain resistance to any medium that challenges
previously held conventions, particularly if that medium is popular, a term usually associated with “lower” forms of culture (Tavinor 2009). However, artists themselves do not exhibit this conservatism, and are often quick to adopt new media to their purposes.
The definition of art is a complex issue, one that goes beyond
the scope of this paper. The process of legitimation of a new
medium as a respected art form, on the other hand, is often a
practical question of academic, cultural, and political acceptance
that occurs externally to the medium or object itself. With most
new media, this process occurs in several overlapping stages.
First, the medium is created and developed by pioneers, who
are usually inspired by previously established media and apply
those media’s rules and conventions to the new medium. (Alexander 2006) Early photographers, such as Julia Margaret Cameron or
Gustave Le Gray, adapted motifs from painting, while early filmmakers, such as Georges Méliès, tended to use theatre conventions.
This is followed by the widespread acceptance and commercialization of the new medium. If it’s a technology-based medium,
continuous experiments are made, leading to developments
which allow for more sophisticated expressions, and which make
the medium more accessible, either by making it easier to work
with or by lowering the technological entry point. In photography, this included the appearance of portable cameras, like Kodak,
Polaroid and instant photography, and more recently, digital photography, while in cinema, the adaptation of microphones led to
the appearance of talkies, among other developments:
Technology does not inherently improve a medium, though it can have
this effect by allowing a wider, more expressive, vocabulary of techniques to develop. A useful comparison can be made here with cinema.
Improvements in film technology have not directly helped cinema to become a more expressive medium, but they have had a positive influence through allowing newer, more expressive, shooting techniques to emerge. (Mitchell 2003)
As a medium enters the public consciousness and becomes more
pervasive, more artists are inspired by it and incorporate it into
their practices. Artists have traditionally been at the forefront of
new media usage and appropriation, having “always borrowed and
used elements, symbols and characters both from art history and
popular culture" (Jansson 2012). Comic books became a source of inspiration for artists in the 1960s. These artworks were distinguished from traditional comic books, but helped pave the way
for the acceptance of comic books into cultural institutions:
The relationship between comics and modern art has a long and tangled history. (…) Some twentieth-century artists, such as Stuart Davis,
Andy Warhol, and Roy Lichtenstein, utilized comic strips as the subject
of their paintings. Others – including Öyvind Fahlström, Richard Hamilton, Jasper Johns, Joan Miró, and Kurt Schwitters – incorporated images
or actual snippets of comic strips in their paintings and collages. (…) But
in all these cases, the distinctions between the inexpensive, mass-produced comic strip and the one-of-a-kind artwork remained clear. This
distinction began to blur in the 1970s with the increasing interest in
non-art drawings; (…) comics began to be exhibited and collected, often
in galleries or museums devoted to a particular discipline (such as the
Cartoon Art Museum in San Francisco). About the same time, art galleries also began to show the original, politically pointed drawings by
so-called underground comics artists such as Bill Griffith, S. Clay Wilson,
and Robert Crumb. (Atkins 2013)
Likewise, many artists who were inspired by television gave
rise to Video art, which is now widely regarded as one of the most important art movements of the twentieth century. Korean-born Fluxus artist Nam June Paik is credited with having made the first pieces of Video art (Atkins 2013), while other artists, such as Dara
Birnbaum, made works inspired by television (Jansson 2012). Artists such as Lucas Samaras adopted the use of photography in their
practices, while Cindy Sherman played with both the photographic
medium and the conventions of cinema. Sherman is part of the
“Pictures Generation”, whose members were “linked by their attraction to photography’s mechanically reproduced image, which they
put to distinctly “unphotographic” purposes.” (Atkins 2013).
As the medium matures, we see the rise of auteurs, individuals
or groups of individuals who explore the medium’s potential, by
discovering what is unique to it and using that in their practices. It
often happens that this stage is associated with the rise of single,
widely recognized authors, as opposed to the diffusion of authorship commonly seen in collaborative media. In the case of comic
books, the appearance of the graphic novel – comics in book form,
as opposed to periodicals – and authors such as Art Spiegelman
and Will Eisner signalled the maturation of the medium as art.
In film, directors such as Ingmar Bergman, Francis Ford Coppola,
Federico Fellini and Stanley Kubrick are considered accomplished
artists, their works often described as masterpieces.
Several authors have described a similar process of legitimation applied to different media, including Janet Murray, in her text Hamlet on the Holodeck (1997), as summarized by
Bryan Alexander:
Every new technology-based medium, [Murray] argues, evolves in two
early stages. The first sees the porting over of forms from other media, as when early movies relied upon theatrical conventions. During the second stage creators pick up on the intrinsic elements of a new medium, and create new forms. In cinematic history, we can consider Griffith's innovation of moving the camera while filming, or Dziga Vertov's use of editing to break up filmic time and space. A similar process is visible
across the history of digital media. (Alexander 2006)
Media like film, photography, and comic books were gradually accepted into the art world, and once that happened, the media's auteurs started being referred to not just as photographers, filmmakers or comic book makers – they were simply artists.
It can be argued that, when it comes to the videogame medium,
several of the stages previously described have already happened.
Early videogame designers and programmers contributed to the
medium in its first stages, which were followed by its widespread commercialization. Technological developments led to the appearance of ever more complex gaming platforms, which developers pushed to their limits through their creations. As the popularity of videogames grew, artists started adapting them into their practices. Concurrently, in a phenomenon that accelerated in the latter half of the 2000s, the medium saw the appearance of its auteurs, commonly referred to as indie videogame developers, working in smaller or one-person teams and with intentions that went beyond simple commercialization. However, the medium's maturation has not been immediately followed by its legitimation as an art form. Videogames have been largely exhibited
in festivals and specialty museums, but are conspicuously absent
from traditional contemporary art museums and events. On the
other hand, Videogame art has become widely accepted by the
art world. Starting in the 1970s, the mingling of popular and high
culture forced artistic institutions to re-evaluate and expand the
concept of what could be exhibited as art, and Videogame art may
help to accelerate that process when it comes to videogames. But
how can we define Videogame art?
3 Videogame art as Media art
Videogame art is sometimes considered a subset of Computer art,
and more commonly a subcategory of Media art. These designations can be confusing and overlapping; as Robert Atkins says
in relation to Computer art, “the use of digital tools in art – and
other realms of contemporary life – is so widespread that it has
undermined the ability of computer (or digital) to describe something distinctive about an artwork” (Atkins 2013). Likewise, the
term New Media can be described as “a blanket term that once
referred exclusively to the genre of art produced by mechanical
reproduction in media more recently invented than photography
(…). New media has acquired a second, more widespread, non-art
meaning, referring to all forms of digital mass media, in contrast
to “old media” such as print newspapers or magazines” (Atkins
2013). It is perhaps more accurate to say that the term Videogame
art originated from the twentieth-century tradition of adding the
word art to a pre-existing medium to signal its appropriation by
artists, as was the case with Video art, Mail art, Sound art, among
others. Videogame art can also be described as a part of a broader
concept of Playable art, which also includes art games (usually
indie) and those commercial videogames that can be considered
to go beyond simple entertainment.
Videogame art can be the use of videogames in a different way
than the one for which they were primarily designed, or it can be
the appropriation, remediation, modification or emulation of videogames, their language and surrounding culture, into an artist's practice. The result can be anything from mods to machinima, installations, videos and performances. However, this classification is problematic, and a source of contention among critics.
There is the question of whether or not to classify as Videogame
art artefacts such as paintings and sculptures that draw heavy inspiration from videogame aesthetics or culture. Matteo Bittanti includes those artefacts in his definition:
Game art is any art in which digital games played a significant role in the creation, production, and/or display of the artwork (The reason why I'm mentioning this is because my own definition of Game Art is broader than the ones formulated by many other critics, as it encompasses traditional artefacts such as painting, sculpture, and photography, and not only digital works.) (…) The resulting artwork can exist as a game,
painting, photograph, sound, animation, video, performance, or gallery
installation. (Bittanti 2009)
However, Bittanti singles out art games, a term often used to
describe games made by indie authors with a specific artistic
intent, as being left out of the discussion: “Although art games
may be considered an expression of Game art, we decided – for a
variety of reasons – not to include them (…)" (Bittanti 2009). Traditionally, art games have not been included in the category of
Videogame art. However, games that are made from scratch by
artists, such as The Night Journey (2007), an experimental videogame made by Bill Viola, a celebrated and widely exhibited video
artist, raise further questions. On the surface, The Night Journey
is similar to many so-called art games, so does it need a different classification because it was made by a traditional artist, as
opposed to being made by a game developer?
This question can be illustrated by looking at the example of
the Museum of Modern Art (MoMA) in New York. The MoMA
is notable for being one of the first museums in the world to include videogames, both commercial – such as Tetris (1984), The Sims (2000) and EVE Online (2003) – and indie – flOw (2006) and Passage (2008) – in its collection (Heddaya 2013). However, those
videogames are in the collection of the Department of Architecture and Design. In contrast, Chinese artist Feng Mengbo’s work
Long March: Restart (2008), described by Mathias Jansson as “a
large-scale interactive video-game installation” (Jansson 2012), is
listed in the Department of Media and Performance Art, classified as an installation.3 What makes this object art, and the other ones design or applied art? A subtle difference can be spotted in the medium's description, with the traditional videogames described as "video game software" (and therefore, one could suppose, standard), as opposed to Long March: Restart, which is described as "video game (color, sound), custom computer software, wireless game controller".4

3 See Feng Mengbo's Long March: Restart (2008) in MoMA's collection: http://www.moma.org/collection/browse_results.php?object_id=122872, compared to, for example, Rand and Robyn Miller's Myst (1993): http://www.moma.org/collection/browse_results.php?object_id=164918.

4 The exception to which is Jason Rohrer's Passage (2007), described as "SDL, GNU Compiler Collection, GNU Emacs, mtPaint, CVS, and MinGW-MSYS software": http://www.moma.org/collection/browse_results.php?object_id=145533.

Similarly, another source of contention is the work of cross-disciplinary artist Toshio Iwai, who, beyond working with installations, music (he created Tenori-on, a handheld digital musical instrument that became a part of MoMA's Design collection in 2009), and television, has also created commercial videogames, such as Electroplankton (2005) for the Nintendo DS. The fluidity of the categories can be understood from Grethe Mitchell and Andy Clarke's view when discussing Videogame art:
We wish to exclude from our discussion the work of artists such as Toshio
Iwai, whose interest is in the creation of wholly original videogames for
use within a gallery setting. (…) We are not excluding all gallery-based
work from our discussion, but wish to make a distinction between the
work of artists such as Iwai, which is typically described using terms
such as “audio-visual installation” rather than videogame, and that of
groups such as Blast Theory, where the relationship with the world of
games and videogames is explicit, acknowledged, and intrinsic to the
work. (Mitchell 2003)
The authors go on to propose grouping videogame artworks
under the categories of remixing, reference, reworking, and reaction (Mitchell 2003). This categorization, while useful, is insufficient for the purposes of this paper, which will follow an approach
closer to Matteo Bittanti and Domenico Quaranta’s. As such, Videogame art is art created by traditional artists that appropriates
the technology, language, content and culture of videogames to
produce artefacts such as machinima, patches, mods, paintings,
photographs, sculptures, performances, video, animation, games,
interventions and installations, as well as works that challenge
easy categorization.
The following section is a short selection of artists working
within Videogame art, in order to highlight diverse developments
in the movement. Due to space constraints, this selection is
meant to be illustrative, not exhaustive, and is necessarily missing important artists in the field.5

5 For example, Harun Farocki, Jon Haddock, Tabor Robak, and the art collective JODI, to name only a few.

4 Videogame Artists

Chinese artist Feng Mengbo was among the first to appropriate
videogame aesthetics into his practice. Mengbo’s work is described
as belonging to Political Pop, a term coined by critic Li Xianting in
1992 to describe an artistic movement “derived from Western Pop
art’s visually arresting depictions of everyday subjects in styles
borrowed from comics and advertising” (Atkins 2013). Mengbo
first linked political sensibilities with popular entertainment in
The Video Endgame Series (1993), “a series of acrylic-on-canvas
paintings in which he mixed images from the Cultural Revolution (1966-1976) with his childhood memories of playing 8-bit
video games” (Jansson 2012). In 1994, he created Game Over: Long
March, a series of 42 paintings that closely resemble screenshots
from early videogames. In 1997, Mengbo created the interactive
CD-Rom Taking Mount Doom by Strategy, “an interactive gaming
platform that blends the idealized Cultural Revolution-era opera
Taking Tiger Mountain by Strategy and the violent Western video
game Doom" (Atkins 2013). He modified Quake (1996) to create
several works such as the machinima-based Q3 (1999), and the
playable mods Q4U (2002) and Ah_Q (2004), from which he also
created several photographs and paintings derived from screen
captures (Krischer 2009). More recently, he created Long March:
Restart (2009), which is played on two large, opposing screens,
forcing the player to turn around and face another screen to
advance through the game’s levels. Writer Carolina A. Miranda
describes the game:
Visually it is a paean to classic games of the 1980s such as Super Mario
Bros. and Street Fighter, but its narrative is largely focused on 20th-century Chinese history, specifically the Long March, the Communist
Army’s gruelling 6,000-mile retreat from the more powerful Nationalist
Army in the mid-1930s. In Mengbo’s game, the player guides an avatar,
a blue-suited member of Mao Zedong’s Red Guard, through the various
stages of the Long March – all while pelting an array of intergalactic
enemy villains with cans of Coca-Cola. (Miranda 2011)
Feng Mengbo exhibited The Video Endgame Series (1993) at the
45th Venice Biennale. Since then, his videogame-based works have
been exhibited extensively all over the world, from the Dia Center
for the Arts in New York, to the Ullens Center for Contemporary
Art in Beijing. In 2002, Q4U (2002) was included in Documenta 11.
Following the acquisition of Long March: Restart (2009) into MoMA's
collection, the piece was exhibited at MoMA PS1 (Jansson 2012).
Greek painter and multimedia artist Miltos Manetas, famous for
creating Internet artworks such as JacksonPollock.org (2003) and
WhitneyBiennial.com (2002), was, together with Mengbo, among
the first to use videogame iconography in his work. His series Videos after Video Game (1996-2006) – which included Flames (1997), a video in which Tomb Raider's Lara Croft is killed by poisonous arrows, and Super Mario Sleeping (1997) – is considered the first example of machinima (Bittanti 2010). He has created several paintings based on videogame culture, and was among the first artists to depict the act of gaming:
Painted in 1997, the piece Christine with Playstation evokes another fleeting moment of style. Angled from above, the painting surveys a domestic scene, the eponymous girl or woman kneels on the floor in front of the television, leaning forward and resting her elbows on a large floor cushion, and holding what is clearly a game controller. (Apperley 2013)
Among the first artists to create mods based on videogames was
Orhan Kipcak, who in 1995 created ArsDoom with Reini Urban:
Using the Doom II engine and Autodesk’s AutoCAD software, Kipcak
and Urban created a virtual copy of the Brucknerhaus’ [the venue for
Ars Electronica Festival in Linz] exhibition hall and invited artists to
create or submit virtual artworks that could be displayed in the new map.
Armed with a shooting cross, a chainsaw or a brush the player could kill
the artists and destroy all the artworks on display. (Jansson 2009)
A similar approach was used by Swedish artists Tobias Bernstrup
and Palle Torsson, who in 1996 started modifying existing videogames such as Duke Nukem 3D (1996) and Half-Life (1998) based on
reconstructions of art museums such as the Arken Museum of Modern Art in Copenhagen and Moderna Museet in Stockholm. The
result, Museum Meltdown (1996-1999), allows players to "run around
the museum, shoot monsters, and destroy art” (Jansson 2012).
Another example of a videogame art mod based on Half-Life is
Adam Killer (1999) by Berlin-based artist Brody Condon. In this
piece, the player is confronted with multiple replicas of the white-clad figure of "Adam", standing passively in a room; the player can
either do nothing or kill the Adams. Condon exploited a glitch in
the game in order to create trailing textures and effects (Gavin
2014). Suicide Solution (2004) is a DVD documentation of characters committing suicide in over fifty commercial shooter games.6
Condon has also experimented with intervention in multiplayer
online games with Gunship Ready (2001):
Designed as a modification of the online game Tribes, this work provides a flying gunship within the world of the game. The players are beckoned by the artist to climb onto this vehicle, but when they do, they find that
they are taken on a tour around and eventually away from the battleground. They have been kidnapped (by the artist), rather than, as they
thought, being taken to more exciting battle. Having been abducted,
they are presented with the situation where they must kill themselves
(in the game) in order to re-enter the action. (Mitchell 2003)
From the 2000s, artists continued staging performances and
interventions within videogame spaces. During the 2004 Republican National Convention in New York, in a work titled Operation Urban Terrain (OUT): A Live Action Wireless Gaming Urban
Intervention, Anne-Marie Schleiner “armed herself with a mobile
Internet connection, a bicycle, a battery-powered video projector,
a team of players and technicians, and a laptop” (Flanagan 2009),
entered the videogame America’s Army (2002) and discussed antiwar ideas with the players, projecting the live game session into
the urban space. Velvet-Strike (2003) is a downloadable collection
of spray paints for the walls of Counter-Strike (2000) by Schleiner,
Brody Condon, and Joan Leandre:
A player, having installed Velvet-Strike, enters a usual online shooter game and is able to spray clearly seeable messages to other players on her surroundings. The sprays one can download from the project's web site range from textual anti-war messages ("If god says to you to kill people / kill god") over rendered posters of soldiers in intimate poses to graffitiesque depictions of teddy bears shooting "love bubbles". (Pichlmair 2006)

6 See Brody Condon's website: http://tmpspace.com/
7 Also known as 0100101110101101.ORG.

8 See the work on Antoinette J. Citizen's website: http://antoinettejcitizen.com/installation/landscape/

9 See Aram Bartholl's website: http://datenform.de/
Joseph DeLappe’s Dead-in-iraq (2006-2011) is another intervention into America’s Army (2002), the U.S. army recruiting game.
Starting in 2006, DeLappe entered the game and proceeded to
type (saying to the other players) the names of American soldiers
who died in the Iraq war. Inevitably, the other players killed him;
he re-incarnated and continued to type. He did so until December 2011, “the announced withdrawal date of the last U.S. troops
in Iraq. Delappe had entered a total of 4484 names in the game”
(Jansson 2012).
Artistic interventions within game spaces can be considered
performance artworks. This performativity was explored by artists such as Gazira Babeli, a performance artist who exists in the
virtual world of Second Life (2003). Within the game, she manipulates the virtual world in order to create prints, performances
and movies, such as Gaz of the Desert (2007), which can then be
exhibited in the physical world.
In 2007, Eva and Franco Mattes7 began to re-enact famous performance pieces from the history of art within Second Life. One of
their chosen performances was Vito Acconci’s Seedbed (1972) and
they also re-enacted performances by Chris Burden, Marina Abramović, and Gilbert & George.
Several artists have created artworks that allow game spaces
to intervene in the physical world. Antoinette J. Citizen created
Landscape (2008), an installation that transformed a gallery room
into a Super Mario level, complete with interactive boxes with
question marks and bricks that produced sounds.8 Berlin-based
artist Aram Bartholl has created several works which allow videogames to invade the real world, such as WoW (2006-2009), an
intervention in which participants construct their own names
out of green cardboard and walk around with them hovering over
their heads, as if they were avatars in World of Warcraft (2004). He
has also brought to life a game level in Dust – Winter Prison (2013),
a large-scale installation at an old prison yard in Quebec, inspired
by the map Dust from Counter-Strike (2000).9
Several artists have experimented with the physicality of videogames in other ways. The artist duo //////////fur//// (Voker Morawe
and TilmanReiff) created PainStation (2001), a two-person gaming console based on the game Pong (1972) which punishes losing players with physical pain on their hands, in the form of heat,
electric shots or a whiplash (Jansson 2012). The physical body of
the player is also implicated in multimedia artist Eddo Stern’s
work. His project Darkgame 2 (2007/2008) is a sensory deprivation
videogame that dynamically separates the player from the avatar
on screen through the use of a head device: as the player loses
his or her physical sensory abilities, the character becomes stronger in the game. Together with Mark Allen, Stern did the performance Tekken Torture Tournament (2001), in which “the participating players were equipped with special bracelets. When the
player was hit by the other player on the screen, he got an electric
shock in the arm. (…) The bracelet was a form of interface that
could connect the virtual pain with the player’s physical body and
transfer the virtual violence into the real world” (Jansson 2012).
Similarly, Riley Harmon's installation What it is without the
hand that wields it (2008) attempts to turn the virtual experience
of videogames into a physical one:
The installation was an electronic sculpture attached to a server where
people played Counterstrike. The sculpture consisted of a number of
blood bags with tubing that was connected to nozzles that were opened
when one of the players on the server was shot. The virtual killing and
violence ran, so to speak, over to reality, the virtual blood in the game
was solidified on the gallery floor. (Jansson 2012)
Perhaps the best-known practitioner of Videogame art is Cory
Arcangel, a multimedia, post-conceptual artist who “collects outmoded computer games, decrepit turntables and similar castoffs
(…). Through a bit of ingenious meddling, he reboots this detritus to produce witty, and touchingly homemade, video and art
installations" (Spears 2011). His earliest piece of Videogame art is Super Mario Clouds (2002), an NES Super Mario Bros (1985) cartridge that he hacked to erase everything except for the clouds, effectively rendering the game unplayable. This was followed by other pieces based on modified code, such as I Shot Andy Warhol (2002), a Hogan's Alley (1984) mod in which the gangsters have been replaced by artist Andy Warhol, while the innocents have been replaced by the Pope, Flavor Flav and Col. Sanders, and F1 Racer Mod (aka Japanese Driving Game) (2004), a mod of the Famicom
game F-1 Race (1984), from which he erased the cars and left only
the road and the scrolling landscape. The 15-minute movie Super
Mario Movie (2005) was produced in collaboration with artist collective Paper Rad:
(…) our protagonist is thrown into a world neither he nor we can comprehend. The rules of the game universe are turned upside down, colors
shift, Mario floats on air. The game's text becomes nonsense and the
screen is at times overtaken by vaguely familiar symbols and abstract
patterns. Through this all, Mario wanders. (Chayka 2011)
Arcangel has also produced Various Self Playing Bowling Games
(2011), a series of large-scale projections of bowling games from
the late 1970s to the 2000s, to which he added several modded
game controllers so that the characters on screen would throw
only straight gutter balls. Similarly, Self Playing Nintendo 64
NBA Courtside 2 (2011) is a mod in which the characters are programmed to miss their shots continually via a modded controller.
Cory Arcangel’s work has been exhibited both in solo and group
exhibitions in places such as the Whitney Museum of American
Art, New York, the Barbican Centre, London, and at the Warhol
Museum, Pittsburgh, among others.
From this selection of artists working with Videogame art, it is possible to identify a few trends. The first artworks to appear
were mainly traditional artefacts, such as paintings, photographs
and videos, which either referenced videogames or were directly
appropriated from the games, through screenshots and machinima. Art mods have tended to favour older games, perhaps because
they are easier to hack and modify than more recent ones (the
same goes for the hardware). As videogames became more complex, artists started staging performative interventions within
the game space, often interacting with other, regular players. And
as the medium matured, artists started to explore the potential
physicality of videogames, effectively blurring the boundaries
between virtual and physical space.
Concurrent to these developments, the mid-2000s saw the rise
of indie videogame developers and the widespread appearance of
art games. Some artists, such as the afore-mentioned Bill Viola,
collaborated with videogame developers in order to bring their
vision into reality. We can perhaps detect a subtle convergence
between Videogame art and more traditional videogames, leading
us to speculate that, as the technologies that allowed the creation
of videogames became more sophisticated and accessible, and as
artists became more literate in the medium, they no longer felt
the need to create mods, and instead started to enter the world of
designing and developing games, effectively becoming the medium’s auteurs.
5 Videogames and the Art World
In the last fifteen years or so, Videogame art has been exhibited in art museums alongside more traditional artefacts. Some examples of such exhibitions, beyond those already mentioned, are "Game Show" (2002) at Mass MoCA, Massachusetts,
“re:Play” (2003) at the Institute for Contemporary Art in Cape
Town, “Killer Instinct” (2003) at the New Museum in New York,
“Bang the Machine: Computer Gaming Art and Artifacts” (2004)
at the Yerba Buena Center for the Arts, San Francisco, and “Space
Invaders" (2011), produced collaboratively by FACT (Foundation for Art and Creative Technology), Liverpool, the Nikolaj Copenhagen Contemporary Art Centre, and the Netherlands Media Art
Institute in Amsterdam. Art games are also often included in
these exhibitions.
Christiane Paul, Adjunct Curator of New Media Arts at the Whitney Museum of American Art, has been at the forefront of exhibiting both Videogame art and art games, at the museum, the biennial, and artport, the Whitney Museum's website dedicated to New media art. Works by Cory Arcangel and the Velvet-Strike mod were included in the 2004 Whitney Biennial. Another institution that recognized early on the importance of Videogame art was the LABoral Centro de Arte y Creación Industrial in Gijón,
Spain, with exhibitions such as “Gameworld”, “Playware” (2008)
and “Homo Ludens Ludens”.
Traditional videogames, on the other hand, are mostly exhibited
in museums as design objects, with most exhibitions that defend
videogames as art taking place in specialty galleries, museums
and events dedicated to design, art and technology. This is not
necessarily negative: many of these museums are exploring the
cutting edge of art, and actively pushing the boundaries of what
is considered art and what is not. However, it is extremely rare
to see them exhibited in contemporary art museums, alongside
sculptures, paintings, performances and installations. Mostly, these museums use them as support materials in their educational departments.
Resistance to the idea of videogames as art stems in large part
from the fact that they can be considered popular entertainment,
and therefore too simplistic to be considered art. Traditionally,
in Western culture, certain media – painting, sculpture, or literature – have been considered inherently better and more dignified
than others – notably, television, comic books, and games. The Pop
art movement questioned the boundaries between high and low
culture, and the distinction has become increasingly blurry, with
more and more areas of human production becoming recognized as
artistic (Atkins 2013). Ultimately, once a medium has matured, the
process of legitimization is driven by forces external to the objects
produced. Certain videogames exhibit conditions, like rules, objectives, and competition, which seem to be outside a traditional conception of the arts; however, they also exhibit many characteristics
that bring them closer to other artistic media, such as "aesthetic
pleasure, stylistic richness, emotional saturation, imaginative
involvement, criticism, virtuosity, representation, and even special focus and institutional aspects” (Perron 2009).
Beyond appropriation by traditional artists and the subsequent
assimilation by the art world, a medium's acceptance is also influenced by the appearance of criticism and academic studies. Even with the proliferation of game studies and a more established tradition of serious criticism of videogames, there is still a need for extensive work when it comes to exhibiting, collecting, and contextually framing videogames, in order for them to achieve the status
of an art form.
6 Conclusion
Beyond the highly debated question of whether videogames
can be art or not, there is the question of Videogame art and its
acceptance into art museums, and whether it helps or hinders
the medium of videogames. Those who defend videogames as art
often criticize the inferiority that the designation Videogame art
(and, for that matter, art games) implies. The purpose of this paper
has been to argue that, as it happened with the legitimation of
other media, the movement of artists adapting videogames into
their work helps to accelerate the process of acceptance of videogames as artistic objects in their own right. When introduced
into galleries, museums, and biennials, audiences are exposed to
videogames in a different context than the one they are used to,
and artists, curators, and critics are encouraged to think about
the medium critically.
Even if some game developers disagree with their work being
considered art, there are many others who consciously affirm their
intention of creating art. While the legitimation of a medium as
art is not necessary for the development of that medium, it can
have considerable value: if something is considered art, then it
is more easily protected by creativity and free speech laws (Jenkins 2005). It can also have an impact on how the public sees videogames.10 In addition, from a museological point of view, videogames call into question traditional modes of exhibition and archiving. Videogames' technological demands, the fact that they
transform the audience from viewers to players, and the issue of
placing them in exhibition spaces largely unprepared to receive
them, are among the specific questions that cultural institutions
and professionals must acknowledge. Moreover, archiving videogames implies more than just preserving the software: historians need to keep in mind, among other things, the hardware used
and the context in which videogames appeared, as well as support
materials. Several organizations and art historians have already
started to address the problems of game preservation, and the
acceptance of videogames can only add to this process.

10 The importance of the public's opinion on the artistic status of a given medium can be illustrated by looking at the history of comic books and their censorship.
Videogames already have the potential to become one of the
most important art forms of this century. Their appropriation by
artists can help to explore, question and advance videogames as
a medium, which is perhaps a more worthy goal than striving for
their legitimation as art by the fickle art world. But their potential influence in promoting videogames' acceptance into art institutions is important to acknowledge, especially for those who believe in and fight for that artistic status, in order to further the
discussion and effect change.
Acknowledgements.
The core argument in this essay was initially developed as a talk proposal of the same name, accepted for presentation at the IndieCade East 2015 conference, February 13-15, at the Museum of the Moving Image in New York City.
References
Alexander, Bryan. “Antecedents to Alternate Reality Games,”
International Game Developers Association (IGDA) white paper, 2006.
Apperley, Thomas H. “The body of the gamer: game art and gestural
excess,” Digital Creativity Vol. 24, No. 2, 145-156, 2013.
Atkins, Robert. ArtSpeak: A Guide to Contemporary Ideas, Movements, and
Buzzwords, 1945 to the Present. New York / London: Abbeville Press
Publishers, 2013.
Bessa, Antonio Sergio. Öyvind Fahlström: The Art of Writing. Evanston:
Northwestern University Press, 2008.
Bittanti, Matteo. “Game Art. (This is not) A Manifesto. (This is) A
Disclaimer,” in Gamescenes: Art in the Age of Videogames, edited by
Matteo Bittanti and Domenico Quaranta, 7-14. Monza: Johan & Levi
Editore, 2009.
Bittanti, Matteo and Mathias Jansson. “Interview: Miltos Manetas,
The First Machinima-Maker,” GameScenes: Art in the Age of Videogames,
July 31, 2010, accessed December 12, 2014, http://www.gamescenes.
org/2010/07/interview-miltos-manetas-the-irst-machinimamaker.html
Chayka, Kyle. “Cory Arcangel’s Surrealist Super Mario,” Hyperallergic,
May 10, 2011.
Falcão, Leo, André Neves, Geber Ramalho, Fábio Campos, and Bruno
Oliveira. “Game as Art: A Matter of Design,” in Actas da 3ª Conferência
de Ciências e Artes dos Videojogos, edited by Rui Prada, Carlos Martinho
and Pedro Santos, 165-170, 2010, accessed December 10, 2014, http://
gaips.inesc-id.pt/videojogos2010/actas/Actas_Videojogos2010.html
Flanagan, Mary. Critical Play: Radical Game Design. Cambridge: MIT
Press, 2009.
Gavin, Erin. “Press Start: Video Games and Art,” Valley Humanities
Review, Spring 2014.
Heddaya, Mostafa. “A Conversation with Paola Antonelli about MoMA’s
Video Game Collection”, Hyperallergic, July 3, 2013.
Jansson, Mathias. “Interview: Orhan Kipcak (ArsDoom, ArsDoom II)
(1995-2005),” GameScenes: Art in the Age of Videogames, November
4, 2009, accessed December 13, 2014, http://www.gamescenes.
org/2009/11/interview-orphan-kipcak-arsdoom-arsdoom-ii-1995.html
Jansson, Mathias. Everything I Shoot Is Art. Brescia: Link Editions, 2012.
Jenkins, Henry. “Games: The New Lively Art,” in Handbook for Video
Game Studies, edited by Jeffrey Goldstein. Cambridge: MIT Press, 2005.
Krischer, Olivier. “Multiplayer Online Cultural Revolution: Feng Mengbo,”
ArtAsiaPacific, Jul/Aug 2009.
Miranda, Carolina A. “Let the Games Begin,” ARTnews, April 2011.
Mitchell, Grethe, and Andy Clarke. “Videogame Art: Remixing,
Reworking and Other Interventions,” Utrecht University and Digital
Games Research Association (DIGRA), 2003.
Parker, Felan. “Authorship, Ambiguity, and Artgames,” The Canadian
Game Studies Association (CGSA), 2013.
Perron, Bernard, and Mark J.P. Wolf (eds.). The Video Game Theory
Reader 2. New York and London: Routledge, 2009.
Pichlmair, Martin. “Pwnd – 10 Tales of Appropriation in Video Games”,
paper presented at the Mediaterra conference, Athens, October 4-10,
2006.
Spears, Dorothy. “I Sing the Gadget Electronic,” The New York Times, May
19, 2011.
Tavinor, Grant. The Art of Videogames. United Kingdom: Wiley-Blackwell,
2009.
xCoAx 2015: Computation, Communication, Aesthetics and X – Glasgow, Scotland – 2015.xCoAx.org
Creative Surrogates:
Supporting Decision-Making
in Ubiquitous
Musical Activities
Damián Keller
Amazon Center for Music Research – NAP, Rio Branco, AC, Brazil
[email protected]
Evandro M. Miletto
IFRS, Porto Alegre, RS, Brazil
[email protected]
Nuno Otero
Linnaeus University, Växjö, Sweden
[email protected]
Keywords: ubiquitous music, creativity-centered design,
graphic-procedural tagging, creative surrogates.
We present results of two studies that address creative decision-making through the usage of local resources. Adopting an
opportunistic design approach (Buxton 2007; Botero et al. 2010;
Visser 1994), both studies use off-the-shelf infrastructure to
identify support strategies that deserve further implementation
efforts. Both studies yielded complete creative products, consisting of a mixed-media performance artwork and a multimodal
installation. We discuss the procedures employed to assist the
decision-making processes with an eye on the development of
new creativity support metaphors. The examples serve to frame
the discussion on human-computer interaction and musical creativity in the context of ubiquitous music making.
169
1 Introduction
This paper deals with the convergence of interaction design techniques and artistic practices. Analyzing two design projects – each involving multiple iterations – we discuss the impact of creative practices on interaction design methods. On the one hand, the
development of technological support for creative practices opens
up new opportunities for artistic application. On the other hand,
the concepts unveiled through the study of creativity expand the
potential for participation in artistic practices. One of the theoretical and methodological perspectives exploring this convergence is ubiquitous music research (Keller et al. 2014a).
Recent advances in creative practices in information technology (Mitchell et al. 2003; Shneiderman 2007; Shneiderman et
al. 2005) indicate the need to change the focus from technological product development to support for meaningful experiences
(Rogers 2014). Creative computing highlights the non-utilitarian
aspects of technology inserted in everyday life (see for example,
Bødker 2006; Harrison et al. 2007). More recently, the aesthetics of interaction design perspective has broadened the application of artistic endeavors to assessing the results of human-computer experiences (Keller et al. 2014b; Löwgren 2009).
In this paper, we discuss exploratory strategies in creativity-centered design (Lima et al. 2012) as a way to encompass
research methods for the study of creative procedures that yield
relevant and original products. We analyze two artistic applications of the graphic-procedural metaphor, focusing on its limitations and its applicability within the realm of artistic creativity.
The results show good potential for enhancing audience participation in music making and for expanding the available spaces
for creative action beyond collocated activity. The graphic-procedural metaphor allows for the use of visual elements to organize
temporal parameters asynchronously. This metaphor is based on
close relationships among local material resources and creative
decisions, mediated by the action of the participants. This mediation mechanism puts the focus on the active participation of
stakeholders in the creative act, emphasizing the human side of
creativity support.
2 Ubiquitous Music and Creative Design
A key aspect of the creative process is the choice or development of
technological support. This task involves finding out how material, cognitive and social factors influence the strategies applied in decision-making. Our overall proposal is to expand the study of creativity to the context of everyday actions. More specifically, we
want to insert music creation in settings that were not originally
designed for music making. A first step has already been taken by experimental studies in ubiquitous music. Ubiquitous music – or ubimus (Keller et al. 2014a) – emerges as a theoretical and methodological alternative to the approaches attached to the European
instrumental musical tradition of the nineteenth century (Tanaka
2009; Wessel and Wright 2002). Ubiquitous musical activities generally use distributed resources and involve multiple stakeholders
with various levels of expertise. While ubiquitous music requires
expanding the access to creative activity by laypeople, the acoustic-instrumental paradigm demands a strict separation between
novices and musicians-performers.
Most research in musical interaction has focused on the validation of instrumental music tools (Tsandilas et al. 2009) or on
simulations and extensions of musical instruments (International Conference on New Interfaces for Musical Expression – NIME). By grounding the design choices on instrumental music molds, the researcher reduces the participant's role to a consumer of ready-made procedures predefining the aesthetic perspective to be adopted. Thus, the creative choices are established by the research design, rather than by the subjects-participants. Hence, the results reflect the methodological choices of the experimental design, restricting the participation of the creator to a predefined creative path. This caveat has been termed early domain restriction
(Keller et al. 2011b).
The procedures that have emerged for supporting ubiquitous
musical activities encompass four inter-related stages: defining
strategies, planning, prototyping and assessment (Pimenta et al.
2014). Given the iterative and participatory nature of our design
practice, these four stages are not necessarily successive and each
stage may be repeated several times during the development cycle.
Our practice suggests three emergent methodological trends that
may be used as general guidelines to define design strategies: (a)
avoid early domain restriction; (b) support rapid prototyping; and
(c) foster social interaction. After the initial choice of design strategies, planning activities may be pursued in the form of exploratory studies. The objective of this design phase is to obtain a set
of requirements and to gather initial feedback on user expectations. Once the minimal requirements and the overall objectives
of the project have been set, simple prototypes can be built to
allow for more detailed on-site observations. Prototypes do not
need to be complete software solutions. This stage’s objective is
to gather useful information on specific aspects of the musical experience. Thus, sonic outcomes can be handled by simplified
signal-processing tools (Lazzarini et al. 2012; 2014b) or by Wizard of Oz simulations (Gould et al. 1983). Design issues of the
adopted interaction approach can be studied by using software
mash-ups, verbal scores, aural scores, graphic scores, storyboards,
videos and animations. The focus is fast turnover, not refined
implementations.
Orchestrating these activities is a tricky business, and no ready-made recipes are available yet. Furthermore, to guide the choice of technologies and interactivities that support the ongoing creative activity, some sort of assessment is necessary, and it should be as closely tied to the activity as conditions permit. Both objective data – related to the subjects' profile, activity variables,
environmental variables and technological infrastructure – and
subjective data – the subjects’ feedback on various aspects of the
experience – should be gathered. Through comparisons among
various conditions, it is possible to evaluate the impact of the
material and the social resources on the participants’ performance. These results feed the previous design phases, pointing to
updated strategies and prototype refinements.
2.1 Creativity support metaphors
Creativity support metaphors are at the contact point between
musical interaction metaphors (Pimenta et al. 2012) and the
proposals laid out in interaction aesthetics applied to creativity
(Keller et al. 2014b). The focus is the sustainable support of creative activity, covering on the one hand the activity results - the
creative products and the generation of resources - and on the
other hand, dealing with the procedures required to achieve creative outputs - the creative or design processes. This latter aspect
differentiates the metaphors for creative action from musical
interaction metaphors. While musical interaction metaphors provide the necessary support for novices and musicians to be able
to achieve musical results, metaphors for creative action target
the increase of the participants’ creative potentials. This creative
potential can impact the intended and the unintended products
of the activity. Hence, the main goal of the support is not the creative product itself, but the ability of the agents to take advantage
of the resources available at the site of the activity.
Classic examples of support metaphors for musical creative
activities are the proportional notation systems. In this case we
are talking about proto-metaphors, since they don’t reach the level
of flexibility necessary to enable activities in everyday contexts.
In proportional notation, the visual representation is directly correlated to the sound parameters (Cope 1974; Keller and Budasz
2010). For instance, a point represents a sound event with a short
duration. A long line indicates a sustained sonic event. One of the
limitations of proportional notation is the distance between the
perception of the spatial representation of the event and the perception of time. Approximate interpretations of duration as absolute time are possible for expert musicians (e.g., “play a 20-second
event as represented by a line on a track”). This is not the case for
novices. As will become clear in the following sections, graphic-procedural metaphors – exemplified in the audiovisual trackers (akin to sequencers with a dynamic temporal display) – may provide a path to overcome this constraint.
2.2 Study 1: Creative surrogates in Tocalor
An interaction technique involving the use of photographs and video footage, inspired by the tradition of experimental-music graphic scores, was devised: graphic-procedural tagging (Melo and Keller 2013). A case study was carried out to develop support for the creation and performance of the composition Tocalor, for instrumental duet and an electroacoustic stereo track.
Fig. 1 Local data used in the Tocalor study.
The creative product is a visual score 5:20 minutes long. The performance materials consist of two wind or string instruments (e.g., clarinet, violoncello or viola), a video projector, a projection screen and a stereo playback system. Tocalor was presented during the International Symposium on Music in the Amazon (SIMA 2013, Rio Branco, Brazil). Two audiovisual data-gathering sessions were done with a consumer-level digital camera. An initial screening of the multiple visual materials was done adopting the criteria suggested by Backhouse (2011). Six images presenting patterns of lines or dots that could be easily adapted to parametric musical reference systems were chosen (see figure 1). Applying a second filtering criterion, four images with clear contrasts between different colors were identified. In particular, pictures 12 and 13 showed a large generative potential, with patterns close to
those observed in multiple-scale self-similar systems (e.g., chaotic and fractal systems) (Malt 1996).
The selection strategy applied in Tocalor prioritized visual
objects that could provide an intuitive dimensioning within a
close range to the size of the human body. Picture 16, chosen as
a basis for the composition, is simple yet it has enough visual elements that can be used as musical data (see picture 16 in Figure
The arrangement of flowers on the horizontal axis suggests an approximate mapping to duration, through matching the spatial position of the flowers to the temporal position of the event (Melo
and Keller 2013). The vertical axis can be interpreted as pitch.
Colors can be mapped to timbre or sound source. Other sound
parameters, such as intensity, can be related to thickness or size
of the visual cue. To keep the material accessible to performers, we decided to adopt pitch as the only parameter displayed on the vertical axis, scaling the frequency values from lower to higher, from the bottom to the top of the figure.
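To make this mapping concrete, the sketch below shows one way the conventions just described could be expressed in code: horizontal position as onset time over the 5:20 score, vertical position as pitch within an assumed clarinet range, and colour as the voice that plays the event. The data structures, range values and function names are illustrative assumptions for this discussion, not part of the Tocalor implementation.

# Illustrative sketch (Python) of the graphic-procedural mapping described above.
# Each visual element is assumed to have been reduced to (x, y, colour), with x and y
# normalized to 0..1 over the image; all names and values are hypothetical.
from dataclasses import dataclass

TOTAL_DURATION = 5 * 60 + 20          # 5:20 score length, in seconds
CLARINET_RANGE = (146.8, 1568.0)      # approx. D3 to G6, in Hz (assumption)

@dataclass
class VisualElement:
    x: float      # 0..1, left to right
    y: float      # 0..1, bottom to top
    colour: str   # e.g. "red" or "yellow" after the selective filter

@dataclass
class SoundEvent:
    onset: float      # seconds from the start of the score
    frequency: float  # Hz, continuous, allowing microtonal inflexions
    voice: str        # which instrument or layer plays the event

def element_to_event(e: VisualElement) -> SoundEvent:
    low, high = CLARINET_RANGE
    # Horizontal position maps 1:1 to time; vertical position maps to pitch,
    # scaled from lower (bottom) to higher (top); colour selects the voice.
    return SoundEvent(
        onset=e.x * TOTAL_DURATION,
        frequency=low + e.y * (high - low),
        voice={"red": "clarinet 1", "yellow": "clarinet 2"}.get(e.colour, "tape"),
    )

if __name__ == "__main__":
    elements = [VisualElement(0.10, 0.8, "red"), VisualElement(0.55, 0.3, "yellow")]
    for ev in sorted((element_to_event(e) for e in elements), key=lambda s: s.onset):
        print(f"{ev.onset:6.1f} s  {ev.frequency:7.1f} Hz  {ev.voice}")

A linear frequency scale is used here only to keep the example short; a perceptually motivated (logarithmic) scale would be an equally valid reading of the vertical axis.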
After listening to the local sound material, it was decided that
it was sufficiently interesting on its own, so other than simple
editing, no sound processing was applied. From the database of
collected sound samples, materials recorded during the morning session were chosen. The selected recording has a complex
texture featuring bird singing and other biophonic events. Most
sounds are concentrated on a frequency band higher than the
pitch range of wind instruments. Therefore, the soundtrack works
as an independent layer that can be used to inform the way the
visual events can be interpreted. This material served as a basis to
deal with the visual parameters, highlighting the relationships of
complementarity between local sound materials and local visual
resources.
Having established the visual and audio materials to be
employed during the compositional process, we created a reference system to generate musical data from the collected visual
data. Firstly, we eliminated the background colors, yielding a neutral gray-scale base. Subsequently, a selective ilter was applied to
restore color to the red lowers. A similar procedure was applied
to recover the yellow lowers. The resulting pattern featured lines
and dots in two colors (igure 2). In order to interpret the graphics
as instrumental performance parameters, we applied a reference
grid on top of the bi-colored patterns. As a compositional choice,
all events were restricted to the instrumental range of the clarinet. Given the continuous distribution of visual elements on the
vertical axis, the position of the elements could simply be interpreted as microtonal changes. Thus we avoided the use of complex symbols to indicate subtle inflexions of tone. Nevertheless, in order to introduce time as a control parameter, it was necessary to implement a format that supported the projection of time-based frames. Considering that most devices can handle multimedia material, we decided to adopt video as the delivery format.
This choice extends Nance's (2007) aural scores to the realm of the audiovisual.

Fig. 2 Reference system and tracker in Tocalor.
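As a rough illustration of the filtering step just described – a neutral gray-scale base with colour restored only to the red and yellow flowers – the following sketch uses Pillow and NumPy with arbitrary thresholds. The libraries, thresholds and file names are assumptions for demonstration; the study does not report which tools were actually used.

# Minimal sketch (Python) of a selective colour filter, assuming Pillow and NumPy.
import numpy as np
from PIL import Image

def selective_filter(path: str, out_path: str) -> None:
    rgb = np.asarray(Image.open(path).convert("RGB")).astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Neutral gray-scale base.
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    out = np.stack([gray, gray, gray], axis=-1)

    # Restore colour only where pixels are strongly red or strongly yellow.
    red_mask = (r > 140) & (r > g + 40) & (r > b + 40)
    yellow_mask = (r > 140) & (g > 140) & (b < 100)
    keep = red_mask | yellow_mask
    out[keep] = rgb[keep]

    Image.fromarray(out.astype(np.uint8)).save(out_path)

if __name__ == "__main__":
    selective_filter("picture16.jpg", "picture16_filtered.png")  # hypothetical file names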
First, we cropped the image into a series of rectangles. Each
rectangle corresponds to a frame within the visual sequence. To
provide a cue of the passage of time, we added a color-changing
tracker on top of the reference lines. Duration was mapped as a
1:1 proportion between position and execution. Hence, the positions of the events are shown exactly at the time they have to be
performed, avoiding ambiguities in the relation notation-performance. The audiovisual score was rendered as a standard video
file and, to allow for distributed performances, it was shared on
YouTube (Melo and Keller 2013).
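The tracker rendering itself can be sketched as a loop that writes video frames in which a vertical cursor sweeps across the score image, preserving the 1:1 proportion between position and elapsed time described above. The use of imageio (with its ffmpeg plugin) and NumPy, the frame rate, and the cursor drawing are illustrative assumptions, not details of the published score.

# Illustrative rendering (Python) of an audiovisual tracker: a cursor moves left to
# right over the score image so that position corresponds 1:1 to elapsed time.
# Assumes an RGB score image; all parameters here are demonstration values.
import imageio.v2 as imageio
import numpy as np

FPS = 25
DURATION = 5 * 60 + 20          # 5:20, in seconds
CURSOR_COLOUR = (255, 0, 0)     # the colour-changing logic is omitted for brevity

def render_tracker(score_png: str, out_mp4: str) -> None:
    score = imageio.imread(score_png)[..., :3]      # H x W x 3 array
    width = score.shape[1]
    writer = imageio.get_writer(out_mp4, fps=FPS)   # requires the ffmpeg plugin
    total_frames = DURATION * FPS
    for i in range(total_frames):
        frame = score.copy()
        x = int(i / total_frames * (width - 1))     # 1:1 position/time mapping
        frame[:, max(0, x - 1):x + 2] = CURSOR_COLOUR
        writer.append_data(frame)
    writer.close()

if __name__ == "__main__":
    render_tracker("picture16_filtered.png", "tocalor_tracker.mp4")  # hypothetical names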
Creative results
The graphic-procedural metaphor supports the use of visual elements to organize temporal parameters synchronously. As a case
study, we described the creation and performance of the multimedia work Tocalor for two clarinets and stereo electroacoustic
soundtrack. The resulting creative surrogate is a 5:20-minute
audiovisual score. Location-specific visual material is anchored
(Keller et al. 2010) through a time-based bi-dimensional reference
system. The visual anchors are used as performance instructions
within the audiovisual score. The piece was presented in November 2013 at the Amazon International Symposium on Music (Melo
and Keller 2013).
The application of the graphic-procedural metaphor was
described in the previous section as an instance of a general
strategy to support collocated, asynchronous creative decision-making. The use of local resources indicated a viable strategy for opportunistic design (Hartmann et al. 2008; Keller et al.
2013; Visser 1994). Creative surrogates - in the form of visual
data - were used to assist the compositional procedures. Through
the application of a time-based reference system, by means of an
audiovisual tracker, visual features were converted into instructions that yielded sonic events.
Fig. 3 Creativity support metaphors for synchronous decision-making: audiovisual trackers.
Figure 3 summarizes the flow of information applied during the Tocalor study. The first stage involved gathering visual data on
site. This data was externalized as creative surrogates by means
of graphic transformations. The adopted reference system provided a mechanism to map visual features of the materials to sonic
events. An AV tracker was used to guide the musicians’ interpretation of the visual elements, providing support for the collocated,
synchronous musical activity.
2.3 Study 2: Creative surrogates in Palaito 1.0
A ten-month design study targeting the observation of creative
artistic practice by a video-artist, a sculptor and a composer,
yielded the multimedia installation Palaito/Palaita/Home-on-stilts 1.0 (Capasso, Keller and Tinajero 2012). Asynchronous, ubiquitous group activities were carried out by the three subjects through lightweight, off-the-shelf infrastructure. Data was extracted from a virtual forum and a file repository (see the procedures section below for the nature of the data collected). The analysis of the creative exchange indicated cycles of activity alternating between reflection, exploratory action and product-oriented
action. Technological support was incorporated through cycles
of demand-trial-assessment, embracing a parsimonious approach
to the adoption of new information technology objects. Through
the adoption of an opportunistic design strategy (Keller et al.
2013; Visser 1994), priority was given to repurposing existing resources as opposed to development from scratch. Creative
results included 19:30 minutes of sonic material and video footage, and three 5x8x3-meter raw-wood sculptures.
Settings and materials
This design study avoided the introduction of disruptive environmental factors by adopting the artists’ usual working settings. Audiovisual source materials were gathered by the authors
through an ecocompositional journey that encompassed several
locations in the Ecuadorean and Peruvian Amazon tropical forest (Keller 2004). These raw materials served as anchors (Keller et
176
al. 2010), for the elaboration of the sculptural, visual, and sonic
elements utilized in the piece. The experience of the journey provided the social grounding for the conceptual relationships later
developed in the multimodal installation (Keller et al. 2014c).
Procedures
During a ten-month period, the three subjects’ creative activities
were monitored using two tools: a virtual forum and a file-exchange repository. The creative exchanges were classified into four distinct types of activity (Keller et al. 2014c): argumentation (a form of dialogic activity involving verbal exchanges), reflective
activity (when no material resources were exchanged), epistemic
activity (exploratory actions targeting increased knowledge to
inform decision-making) and enactive activities (actions that
impact material resources and products). Although a detailed description of the classifications made and the overall exchange processes is beyond the scope of the present paper, we can give a generic overview of the nature of the data collected and analyzed. Argumentation was done mostly through asynchronous dialogues among the stakeholders (only two encounters were carried out through video-conference). The observed exchange
of textual, visual and sonic materials enabled the participants
to explore further the ideas under consideration and can also
be viewed as a form of dialogue complementary to the process
of argumentation referred to before.. Enactive activity involved
the exchange of material that was intended to be part of the
work. Therefore, only the materials that were approved through
an argumentation cycle of proposals and commitments and that
were labeled as acceptable creative products by at least one of the
artists were considered to be the outcomes of enactive activity.
Fig. 4 Creativity support metaphors
for asynchronous decision-making:
designing AV sketches.
The procedural depiction of the Palafito study is structurally
similar to study 1. Local data, in this case representations of
source sounds and footage, were shared through creative surrogates. The AV sketches provided a temporal reference system for
the asynchronous decision-making process. Given the collective
character of the endeavor, a common reference system becomes a
requirement. Local decisions can only be made if the stakeholders have access to the status of the other participants. This study
made use of volatile resources – in the form of AV sketches – to
increase the flexibility of the exchange, reducing its ecological
impact.
Creative results
The study yielded the multimedia installation Palafito/Palafita/
Home-on-stilts. Its first exhibit was held at the Floor4Art venue in
Manhattan, New York. The exhibit took place during the month
of November 2012 and ended with a closing gathering on December 1. The second exhibit took place in Denver, CO, USA, at the
Museum of the Americas from June to September 2013.
The sculpture featured three 5x8x3-meter metal and wood vertical structures hanging from the ceiling and placed on the floor of
the installation space. Three audiovisual tracks, lasting 6:30 minutes each, were played as loops on two stereo and one mono playback modules. The single-track module consisted of a DVD-player
and a directional speaker (house 3). The speaker was attached to
the ceiling, pointing straight downward, and the sound beam was
adjusted to span a radius of approximately one meter, creating
an isolated sound field. The video footage was displayed on a 10”
LCD screen. The two stereo modules featured video projectors
attached to the ceiling, facing opposite walls (houses 1 and 2).
Two DVD-players sent audio to two sets of speakers hanging from
the walls at a height of 2.5 meters, matching the locations of the
projected videos.
The layout of the installation was designed to allow the visitors to walk freely within the gallery space. Consistent with
other ecologically grounded creative endeavors (Keller 2000), the
actions of the visitors were considered a central component of the
artwork experience. Depending on the locations of the participants, different combinations of visual and sonic content were
available. The house 1 module defined a sound field constrained
to the sound beam area. Thus, the listeners had to be standing in
front of the module to access the sounds. The sound fields corresponding to houses 2 and 3 were audible throughout the gallery
space. But given the different distances from the sources, visitors
were free to design their own mixes by exploring the multiple perspectives afforded by the space.
3 Implications for the development
of creativity support metaphors
We discussed two exploratory design studies - involving complete
creative cycles - which yielded public presentations of artistic
products. The first study targeted the use of local visual resources
to produce audiovisual trackers for a mixed media performance.
The deployment of the creativity support metaphor graphic-procedural tagging called for the participation of two musicians
who employed the visual data – structured as an audiovisual
score – as continuous pitch and onset-duration parameters. Execution time was directly correlated to the spatial position of the
tracker on the score. The flowers' colors extracted from the original picture – pink and yellow – were repurposed to separate the
instrumental sources. The sources were chosen ad libitum by the
musicians. Pitch content was indicated by the distribution of the
flowers' colored markings on the vertical axis, dynamics being
defined by the markings' widths. An unprocessed recording done
on site, following the traditional soundscape methods (Truax
2002), was used to define the total duration of the piece.
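To make the graphic-procedural mapping concrete, the sketch below shows one hypothetical way such a visual-to-sonic mapping could be expressed in code; the field names, normalizations and pitch range are illustrative assumptions, not the system used in Tocalor.

```python
# Hypothetical sketch of a graphic-procedural mapping in the spirit of Tocalor.
# Names and scalings are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Marking:
    x: float       # horizontal position on the score, normalized 0..1
    y: float       # vertical position on the score, normalized 0..1
    width: float   # width of the colored marking, normalized 0..1
    color: str     # "pink" or "yellow"

def marking_to_event(m: Marking, total_duration_s: float,
                     pitch_low_hz: float = 220.0, pitch_high_hz: float = 880.0):
    """Map one visual marking to a sonic event (onset, pitch, dynamic, source)."""
    onset = m.x * total_duration_s                                 # tracker position -> execution time
    pitch = pitch_low_hz + m.y * (pitch_high_hz - pitch_low_hz)   # vertical axis -> pitch
    dynamic = m.width                                              # marking width -> dynamics (0..1)
    source = 0 if m.color == "pink" else 1                         # color separates the instrumental sources
    return {"onset_s": onset, "pitch_hz": pitch, "dynamic": dynamic, "source": source}

# Example: a 3-minute piece whose total duration comes from the on-site soundscape recording.
events = [marking_to_event(m, total_duration_s=180.0)
          for m in (Marking(0.1, 0.5, 0.3, "pink"), Marking(0.4, 0.8, 0.6, "yellow"))]
```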
The second project, a large-scale installation commissioned
from the Capasso+Keller+Tinajero Collective by the II Biennial on
Latin American Art, was presented at the Denver Museum of
Latin American Art and at the Floor4Art Studio Space in Manhattan, New York. The artwork featured three sculptural objects
and three video and audio tracks that made use of ecologically
grounded techniques to process Western Amazon audio and visual
footage. The layout of the installation was designed to foster an
active engagement with the multimodal elements of the piece.
Visitors were encouraged to walk through the space to experience
multiple combinations of sound fields. Following an ecologically
grounded creative practice (Burtner 2005; Gomes et al. 2014;
Keller 2000), the actions of the visitors were used to support the
decision-making processes that shaped the aesthetic experience.
The results yielded by both studies indicate a recurring strategy
in ecologically based creativity support metaphors. The projects
discussed in this paper employed creative surrogates as material
resources to support aesthetic decisions. The comparison of the
processes involved in the two studies reported strongly suggests
the existence of two distinct modes of usage of creative surrogates for the scaffolding of the different decisions involved in
the creative processes. The first model is tied to the synchronous
nature of the musical activities under scrutiny. Synchronous
musical activities demand a single temporal representation to
interpret the visual data. In Tocalor, this visual data was used as
a trigger for human performance of musical events by means of
an audiovisual tracker. Contrastingly, audiovisual sketches serve
as material proxies for distributed resources, supporting decision
making through a shared symbolic representation of time that
can be accessed remotely by all stakeholders. In Palafito, asynchronous activities featured audiovisual sketches as creative surrogates. While the latter model can handle the participation of
non-musicians through unstructured exploratory actions within
the installation space, the former model necessarily targets musicians that have the expertise to use the AV trackers’ visual information as a guide for bodily actions.
Given the use of local resources as materials for creative actions,
both models abide by the directives of ecologically grounded creative practice (Keller 2012; Keller et al. 2014c). Nevertheless, only
the audiovisual-sketch model meets the usability demands of
ubiquitous music ecosystems (Lazzarini et al. 2014b) providing
support for casual, untrained users. Audiovisual trackers require
trained musicians that can synchronize their actions to complex
visual cues with little look-ahead time. This type of decision-making activity places high demands on cognitive resources, hence
it probably demands automatic mechanisms that are typical of
expert performance (Shanteau et al. 2002). AV trackers fit the
narrow view on embedded-embodied musical cognition that links
musical activity exclusively to bodily actions (Nijs et al. 2009).
In order to enhance the range of applications of AV trackers for
everyday usage, some adjustments are necessary.
Firstly, both collocated and distributed musical activities need
to be supported. By incorporating synthesized sounds through
sonification techniques (Serafin et al. 2011), AV trackers do not
need to rely on collocated human actions for sound rendering.
Remote stakeholders may assess the musical results by making
local changes to the AV tracker. A shared consistent representation – akin to a musical prototype (Miletto et al. 2011) – may
reflect the stakeholders' proposals. To avoid intensive bandwidth usage, the system needs to support local sound rendering rather than relying on video formats. In this scenario, all data exchanges
may be done using standard still images.
Secondly, temporal synchronization among remote resources
can only be accurate through asynchronous mechanisms (see
Barbosa 2010 for a discussion on the limitations of traditional
performance approaches to network-based musical activities).
Sonic events do not need to be synchronous; they only need to be
perceived as synchronous. This subjective perception of synchronicity can be attained by aligning the events to the local clock. A
fitting metaphor is provided by the theory of relativity: there are as
many different times as there are space-time reference systems.
For example, two stakeholders participating at a local (A) and at
a remote location (B) produce two sequences of events. Stakeholder A adopts clock A as her frame of reference to generate a
sequence A. Stakeholder B adopts clock B to produce a sequence
B. In order to synchronize the sequences A and B,
the remote clock needs to be adjusted to the local clock. Thus, if
clock B is slower than clock A, sequence B needs to be accelerated
to fit sequence A's temporal frame. Also, sequence B's onset needs
to be aligned to match the onset of sequence A. For AV trackers,
this implies a two-stage procedure: (1) the local system calculates
the difference between clock A and clock B yielding a time-difference index; (2) this index is used to adjust the speed of the AV
tracker. In a hypothetical scenario in which the infrastructure
handles the visual information and the meta-data separately, no
exchange of audio or video material is necessary. Hence, sonification-based audiovisual trackers can bypass the requirements of
computationally demanding audio-event recognition systems.
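As a rough illustration of the two-stage procedure just described, the following sketch computes a time-difference index and re-times a remote sequence; the parameter names and the exact re-timing formula are assumptions, since only the general mechanism is specified above.

```python
# A minimal sketch of the two-stage synchronization procedure described above.
# Clock values are assumed to be timestamps in seconds; the speed factor and onset
# offset are illustrative assumptions about how an AV tracker might apply them.

def time_difference_index(clock_a_s: float, clock_b_s: float) -> float:
    """Stage 1: the local system computes the difference between clock A and clock B."""
    return clock_a_s - clock_b_s

def retime_remote_sequence(onsets_b, rate_a: float, rate_b: float,
                           clock_a_s: float, clock_b_s: float):
    """Stage 2: the index is used to adjust the speed of the AV tracker.

    If clock B runs slower than clock A (rate_b < rate_a), sequence B is
    accelerated, and its onset is shifted to match the onset of sequence A.
    """
    speed = rate_a / rate_b
    offset = time_difference_index(clock_a_s, clock_b_s)
    return [onset * speed + offset for onset in onsets_b]

# Example: remote events at 0, 1 and 2 s, with clock B running 10% slow
# and lagging 0.25 s behind clock A.
local_onsets = retime_remote_sequence([0.0, 1.0, 2.0], rate_a=1.0, rate_b=0.9,
                                      clock_a_s=10.25, clock_b_s=10.0)
```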
4 Conclusions
The two exploratory design studies reported in this paper yielded
two instances of creative surrogates as viable support mechanisms for creative musical activity. Audiovisual trackers can be
used to support synchronous, collocated decision-making activity. Audiovisual sketches are useful for asynchronous activity or
when stakeholders and resources are not collocated. AV trackers demand domain-specific skills that may not be attainable by
casual users. Contrastingly, AV sketches have good potential for
untrained stakeholders. Thus, they enlarge the palette of support
techniques for everyday creative practices.
Acknowledgements.
This research was partially funded by CNPq grants 455376/2012-3, 407147/2012-8, by the Lower Manhattan Community Council, New York, USA (Artist Grants 2012, 2013), and by Maynooth
University, Ireland.
References
Backhouse, J. “Chi-ca-go [Live vocal plus electronics work]”, Chicago, IL,
USA, http://www.jedbackhouse.com/ma-thesis.html, 2011.
Barbosa, Á. "Network musical performance (Performance musical em
rede)”, in Keller, D. and Budasz, R., eds., Goiânia, GO: Editora ANPPOM,
2010, pp. 180-200.
Bødker, S. “When second wave HCI meets third wave challenges”,
in ‘Proceedings of the 4th Nordic Conference on Human-Computer
Interaction: Changing roles’, New York, NY: ACM, 2006, pp. 1-8.
Botero, A., Kommonen, K.-H. and Marttila, S. “Expanding design
space: Design-in-use activities and strategies”, in Durling, D., Bousbaci,
R., Chen, L.-L., Gautier, P., Poldma, T., Roworth-Stokes, S. and
Stolterman, E., eds., ‘Proceedings of the DRS 2010 Conference: Design
and Complexity’, Montreal, Canada: DRS, 2010.
Burtner, M. “Ecoacoustic and shamanic technologies for multimedia
composition and performance,” Organised Sound (10), 2005, pp. 3-19.
Buxton, W. Sketching User Experiences: Getting the Design Right and the
Right Design, New York, NY: Elsevier / Morgan Kaufmann, 2007.
Cope, D. New music composition, New York, NY: Schirmer Books, 1977.
Gomes, J., Pinho, N., Lopes, F., Costa, G., Dias, R., Tudela, D. and
Barbosa, Á. “Capture and transformation of urban soundscape data for
artistic creation,” Journal of Science and Technology of the Arts (6:1),
2014, pp. 97-109.
Gould, J., Conti, J. and Hovanvecz, T. “Composing letters with a
simulated listening typewriter,” Communications of the ACM (CACM)
(24:4), 1983, pp. 295–308.
Harrison, S., Tatar, D. and Sengers, P. “The three paradigms of HCI”,
in ‘Proceedings of the ACM CHI Conference on Human Factors in
Computing Systems (CHI 2007)’, 2007, pp. 1-18.
Hartmann, B., Doorley, S. and Klemmer, S. “Hacking, mashing, gluing:
Understanding opportunistic design,” Pervasive Computing IEEE (7:3),
2008, pp. 46-54.
Keller, D. “Compositional processes from an ecological perspective,”
Leonardo Music Journal (10), 2000, pp. 55-60.
Keller, D. “Paititi: A Multimodal Journey to El Dorado”, Stanford
University, Stanford, CA, USA, [AAI3145550], 2004.
Keller, D., Barreiro, D. L., Queiroz, M. and Pimenta, M. S. “Anchoring
in ubiquitous musical activities”, in ‘Proceedings of the International
Computer Music Conference’, Ann Arbor, MI: MPublishing, University
of Michigan Library, 2010, pp. 319-326.
Keller, D. and Budasz, R., (eds.) Music Creation and Technologies:
Interdisciplinary Theory and Practice (Criação Musical e Tecnologias:
Teoria e Prática Interdisciplinar), Vol. 2, Goiânia, GO: Editora ANPPOM,
2010.
Keller, D., Flores, L. V., Pimenta, M. S., Capasso, A. and Tinajero, P.
“Convergent trends toward ubiquitous music,” Journal of New Music
Research (40:3), 2011a, pp. 265-276.
Keller, D., Lima, M. H., Pimenta, M. S. and Queiroz, M. “Assessing
musical creativity: Material, procedural and contextual dimensions”,
in ‘Proceedings of the National Association of Music Research and
Post-Graduation Congress - ANPPOM’, National Association of Music
Research and Post-Graduation, Uberlândia, MG: ANPPOM, 2011b, pp.
708-714.
Keller, D., Lazzarini, V. and Pimenta, M. S., (eds.) Ubiquitous Music,
Vol. XXVIII, Heidelberg Berlin: Springer International Publishing,
2014a.
Keller, D., Otero, N., Lazzarini, V., Pimenta, M. S., Lima, M. H.,
Johann, M. and Costalonga, L. “Relational properties in interaction
aesthetics: The ubiquitous music turn”, in Ng, K., Bowen, J. P. and
McDaid, S., eds., ‘Proceedings of the Electronic Visualisation and the
Arts Conference (EVA 2014)', London, UK: BCS, Computer Arts Society
Specialist Group, 2014b.
Keller, D., Timoney, J., Costalonga, L., Capasso, A., Tinajero, P.,
Lazzarini, V., Pimenta, M. S., de Lima, M. H. and Johann, M.
"Ecologically grounded multimodal design: The Palafito 1.0 study", in
‘Proceedings of the International Computer Music Conference (ICMC
2014)’, Ann Arbor, MI: MPublishing, University of Michigan Library,
Athens: ICMA, 2014c.
Lazzarini, V., Costello, E., Yi, S. and Fitch, J. "Development tools for
ubiquitous music on the world wide web”, in Keller, D., Lazzarini, V. and
Pimenta, M. S., eds.,’Ubiquitous Music’, Heidelberg Berlin: Springer
International Publishing, 2014a, pp. 111-128.
Lazzarini, V., Keller, D., Pimenta, M. and Timoney, J. “Ubiquitous
music ecosystems: Faust programs in Csound”, in Keller, D., Lazzarini,
V. and Pimenta, M. S., eds.,’Ubiquitous Music’, Heidelberg Berlin:
Springer International Publishing, 2014b, pp. 129-150.
Lazzarini, V., Yi, S., Timoney, J., Keller, D. and Pimenta, M. S. “The
Mobile Csound Platform", in 'Proceedings of the International Computer
Music Conference’, ICMA, Ann Arbor, MI: MPublishing, University of
Michigan Library, Ljubljana, 2012, pp. 163-167.
Lima, M. H., Keller, D., Pimenta, M. S., Lazzarini, V. and Miletto, E.
M. “Creativity-centred design for ubiquitous musical activities: Two
case studies,” Journal of Music, Technology and Education (5:2), 2012,
pp. 195-222.
Malt, M. “Lambda 3.99 (Chaos, et Composition Musicale)”, in Assayag,
G. and Chemillier, M., eds., ‘Proceedings of the 3rd Journées
d’Informatique Musicale (JIM96)’, Île de Tatihou, Basse Normandie,
France: JIM, 1996.
Melo, M. T. S. and Keller, D. "Tocalor: Exploration of the graphic-procedural metaphor in a mixed media artwork (Tocalor: Exploração
da marcação procedimental-gráfica em uma obra mista)", in Keller, D.
and Scarpellini, M. A., eds., ‘Proceedings of the Amazon International
Symposium on Music (SIMA 2013)’, Rio Branco, AC: EDUFAC, 2013.
Video 1: http://youtu.be/vnER4quM3hU. Video 2: http://youtu.be/
Ew9kPgtKKNs.
Miletto, E. M., Pimenta, M. S., Bouchet, F., Sansonnet, J.-P. and
Keller, D. “Principles for music creation by novices in networked music
environments,” Journal of New Music Research (40:3), 2011, pp. 205-216.
Mitchell, W. J., Inouye, A. S. and Blumenthal, M. S. Beyond
Productivity: Information Technology, Innovation, and Creativity,
Washington, DC: The National Academies Press, 2003.
Nance, R. W. “Compositional explorations of plastic sound”, Doctoral
Thesis in Music, De Montfort University, UK, 2007.
Nijs, L., Leman, M. and Lesaffre, M. “The musical instrument
as a natural extension of the musician”, in ‘Fifth Conference on
Interdisciplinary Musicology (CIM09)’, Paris, France: LAM, 2009, pp.
132-133.
Rogers, Y. "Mindless or mindful technology?", in 'Proceedings of the 2014
ACM Symposium on Engineering Interactive Computing Systems
(SIGCHI 2014)', ACM, New York, NY, USA, 2014, p. 241.
Serafin, S., Franinović, K., Hermann, T., Lemaitre, G., Rinott, M.
and Rocchesso, D. “Sonic interaction design”, in Hermann, T., Hunt,
A. and Neuhoff, J. G., eds., 'The Sonification Handbook', Berlin: Logos
Publishing House, 2011, pp. 87-110.
Shanteau, J., Weiss, D. J., Thomas, R. P. and Pounds, J. C.
“Performance-based assessment of expertise: How to decide if someone
is an expert or not," European Journal of Operational Research (136:2),
2002, pp. 253-263.
Shneiderman, B. “Creativity support tools: accelerating discovery and
innovation,” Communications of the ACM (50:12), 2007, pp. 20-32.
Shneiderman, B., Fischer, G., Czerwinski, M., Myers, B. and Resnick,
M. “Creativity support tools: A workshop sponsored by the National
Science Foundation”, Technical report, Washington, DC: National
Science Foundation, 2005.
Tanaka, A. “Sensor-based musical instruments and interactive music”,
in Dean, R. T., ed.,, New York, NY: Oxford University Press, 2009, pp.
233-257.
Truax, B. “Genres and techniques of soundscape composition as
developed at Simon Fraser University,” Organised Sound (7:1), 2002, pp.
5-14.
Tsandilas, T., Letondal, C. and Mackay, W. E. "Musink: composing
music through augmented drawing", in 'Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems’, New York, NY:
ACM, 2009, pp. 819-828.
Visser, W. “Organisation of design activities: opportunistic, with
hierarchical episodes," Interacting with Computers (6:3), 1994, pp.
239-274.
Wessel, D. and Wright, M. “Problems and prospects for intimate musical
control of computers,” Computer Music Journal (26:3), 2002, pp. 11-22.
The Emergence of Complex Behavior as an Organizational Paradigm for Concatenative Sound Synthesis
Peter Beyls
CITAR - UCP Porto, Portugal
[email protected]
Gilberto Bernardes
INESC TEC, Porto, Portugal
[email protected]
Marcelo Caetano
INESC TEC, Porto, Portugal
[email protected]
Keywords: Audiovisual system, distributed agent system,
emergence, concatenative sound synthesis, interactive musical
improvisation.
Multi-agent systems commonly exhibit complex behavior resulting from multiple interactions among agents that follow simple
rules. In turn, complexity has been used as a generative and organizational paradigm in audiovisual works, exploiting features
such as behavioral and morphological complexity for artistic
purposes. In this work, we propose to use the Actor model of social
interactions to control a concatenative synthesis engine called
earGram in real time. The Actor model was originally developed
to explore the emergence of visual patterns. On the other hand,
earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The proposed integration results in the emergence of complex behavior from the Actor
model acting as an organizational paradigm for concatenative
sound synthesis.
1 Introduction
Natural systems such as insect swarms, the immune system, neural networks, and even chemical reactions (Bak 1995, Kauffman
1995, Camazine 2003) are widely considered to exhibit complex
behavior arising from multiple local interactions among agents
following simple rules. The self-organizing behavior of social
animals (Reynolds 1987) has been used to explain certain social
interactions, including in human society (Ulanowicz 1979). Interestingly, the emergence of complex behavior in computer simulations of natural systems has been explored aesthetically as an organizational paradigm in artistic settings such as dance (Tidemann
2007), audiovisual installations (Beyls 2012), sound and music
(Miranda 1994, Blackwell 2002, Caetano 2007), and sculpture
(Todd 1992), among others.
In contrast to top-down design in most cultural artifacts, natural systems exhibit patterns arising from multiple local interactions among individuals or entities that do not exhibit the patterns themselves. From the stripes of zebras to snowflakes and
termite mounds, pattern at the global level emerges solely from
interactions among lower-level components (Camazine 2003).
Much research in the discipline of artificial life studies life-like
emergence in forms of synthetic biology (Langton 1997). Recent
work in artificial chemistry (Dittrich et al. 2001) offers a wealth
of models for constructing emergent behavior. For example, the
idea of molecular interaction may successfully underpin complex
musical human-machine interaction (Beyls 2005). Various music
systems were built exploiting swarming behavior (Blackwell and
Bentley 2002) – a model first formalized in the original flocking
algorithm (Reynolds 1987). Miranda (1994), in turn, proposes to
use the patterns that emerge from cellular automata in music
composition. Caetano (2007) exploits the self-organizing dynamics of different algorithms inspired by biological systems to obtain
trajectories that drive sound transformations.
In this work, we propose to use the complex behavior that
emerges from a multi-agent system called the Actor model to
drive earGram, a concatenative synthesis engine, in real time.
The Actor model of social interactions uses the concepts of affinity and sensitivity to iteratively displace the agents, called actors, to different
settings of social stress. The self-organizing nature of the Actor
model results in intricate visual trajectories followed by the actors.
These trajectories, in turn, are used as input to earGram, a concatenative sound synthesis engine. EarGram organizes a collection of sounds in the plane according to their intrinsic perceptual
qualities, such that neighboring sounds are more similar than
sounds that are far apart. Therefore, spatial trajectories result in
sonic trajectories that become gradual transformations along the
perceptual dimensions used to organize the sounds. The user can
choose the sound features corresponding to the dimensions of
the space, which results in different configurations of the sounds
in the plane. Consequently, the same trajectory can have several
different sonic results.
Our goal is to build a system supporting non-trivial, rewarding
human-machine interaction. In contrast to conventional linear
mapping, the user interacts with the Actor model indirectly by
changing the affinity and sensitivity values, which results in different dynamic configurations. The system dynamics becomes the
organizational paradigm followed when exploring the conceptual
space of sonic results. The actors behave autonomously from the
specification of simple local instructions, yet the system is open
to disturbance by an external human performer (HP), offering
fascinating aesthetic potential for human-machine interaction.
A perception of life-like qualities then becomes apparent: one
interacts with a quasi-unpredictable system while the structural
integrity of that system remains. Such a work suggests critical
consideration of the notions of interactivity, intricacy, participation and unpredictability.
This paper is further structured as follows: first, we explain
the Actor model and its behavioral scope, then we address concatenative sound synthesis in earGram. Finally, the implementation
of a functional bridge between both components is presented. We
discuss the visual and sonic components of the system, including
aesthetic considerations and user interaction.
2 An Emergent Organizational Paradigm
Linear top-down planning and design suffers from a knowledge
acquisition bottleneck. In contrast, collective behavior commonly
presents self-organizing properties whereby pattern at the global
level emerges solely from interactions among lower-level components. Remarkably, even very complex structures result from the
iteration of surprisingly simple behaviors performed by individuals relying only on local information.
2.1 The Party Planner Model
Our implementation is inspired by the Party Planner Model (PPM),
developed by Rich Gold and documented in his seminal book The
Plenitude (Gold 2007). Imagine a party where each individual aims
to be physically close to people one likes and as far away as possible from people one dislikes. An individual’s level of unhappiness
is the perceived social stress impinging at a particular location
in physical space. Formally, given N actors, the level of unhappiness of actor A at index i is expressed in eq. (1) as the sum SA
of absolute values of the differences in ideal distance Di minus
actual distance Da for all (N-2) actors. An actor does not express
any social opinion towards itself, thus N-2 evaluations take place
starting from index 0.

eq. 1: SA = Σ |Di - Da| (summed over the other actors)
Every person aims to minimize his/her level of unhappiness by
moving in space, to a neighboring spatial location, a few steps
away from the current location, potentially offering less social
stress. As a result, a person will relocate to his/her ideal distance
from every other person thus minimizing the total perceived level
of unhappiness.
In every process cycle, all actors consider eight alternative
directions to move, as depicted in Figure 1. A list of different
social tensions is computed from the observation of the grand
sums of impinging stress. Finally, the algorithm favors the direction to move implying the least stress of all eight directions. All
actors proceed according to the same logic. However, actions by
individual actors only observe local social concerns, i.e. the evaluation of stress towards the closest neighbor. As a result, the
process proceeds as an animated sequence of globally complex
spatial configurations. In addition, conflicting requirements may
contribute to highly non-linear behavior. For example, actor A
may prefer to be close to actor B while actor B aims to be far away
from actor A. Merging this local concern with impact from neighboring actors, complex following or push-pull oscillatory behavior
might emerge. One may think of PPM as a complex dynamical
system that, according to the specification of particular social
preferences, will produce spatiotemporal patterns of considerable
intricacy.
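The following sketch condenses one PPM update cycle as described above (eq. 1 plus the eight-direction evaluation); the step size, data layout and tie-breaking are illustrative assumptions rather than Gold's or the authors' implementation.

```python
# A compact sketch of one PPM update cycle: unhappiness is the sum of
# |ideal distance - actual distance| towards the other actors (eq. 1), and each
# actor evaluates eight candidate directions a few pixels away, keeping the one
# with the least stress. Step size and data layout are assumptions.

import math

DIRECTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappiness(i, positions, ideal):
    """Eq. 1: the level of unhappiness of actor i, summed over all other actors."""
    xi, yi = positions[i]
    total = 0.0
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue  # an actor expresses no social opinion towards itself
        actual = math.hypot(xi - xj, yi - yj)
        total += abs(ideal[i][j] - actual)
    return total

def step(i, positions, ideal, step_px=5.0):
    """Move actor i to the neighboring location offering the least social stress."""
    best = positions[i]
    best_stress = unhappiness(i, positions, ideal)
    for dx, dy in DIRECTIONS:
        candidate = (positions[i][0] + dx * step_px, positions[i][1] + dy * step_px)
        trial = positions[:i] + [candidate] + positions[i + 1:]
        stress = unhappiness(i, trial, ideal)
        if stress < best_stress:
            best, best_stress = candidate, stress
    return best

# Example: three actors, with every pair wanting to be 100 px apart.
pos = [(0.0, 0.0), (300.0, 0.0), (0.0, 300.0)]
ideal = [[0, 100, 100], [100, 0, 100], [100, 100, 0]]
pos[0] = step(0, pos, ideal)
```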
2.2 The Actor Model
An extended version of the PPM called Actors has been used to simulate collective musical improvisation (Beyls 2010). In this work,
we propose to use the Actor model as an organizational paradigm in
concatenative sound synthesis. In the Actor model, the user does
not control the system. Instead, the user influences the outcome
of an otherwise self-organizing social system. In other words, we
interfere with the system's innate behavior in two possible ways.
First, a virtually present HP interacts with the actors who, in turn,
acknowledge the HP's social preferences. The HP interacts with
the system via a control interface (e.g., MS's Kinect or Nintendo's
Wii) that maps the actions into the space. The system is influenced only locally but might entail the emergence of complex patterns. The other possibility is to have the HP conceptually outside
the actor society but able to adjust global parameter settings. So
the interaction happens via the parameter settings of the system.
Fig. 1a Representation of the affinity matrix.
Fig. 1b Representation of a moving actor.
The current implementation, reported here, documents the second method.
The dynamic scope of the system is conditioned by two parameters: (1) the affinities-matrix (Figure 1a), specifying the ideal distances of every actor towards every other actor, and (2) a sensitivity
parameter, a value local to every actor. Sensitivity level is a single
scalar value, private to the actor. It specifies the actor's sensitivity
to any other actor, irrespective of the sensitivity of the other actor.
Sensitivity conditions a distance threshold that in turn conditions
interactions with a list of temporary neighbors.
Intuitively, it is easy to see a connection between the range
and diversity of values in the matrixes and the
complexity of the ensuing spatiotemporal behavior. For example,
given roughly equal values in the affinities-matrix, all actors will
relocate to be at equal distances.
In every process step, the affinities-matrix is consulted to compute a list of new potential positions according to Figure 1b; eight
potential locations, at a 5-pixel radial distance, are considered
relative to the position of the actor. However, only actors that are
close enough are considered neighbors, that is, when their distance is within the sensitivity range currently expressed in the
sensitivity value of the perceiving actor.
A wide range of spatiotemporal phenomena is generated from
the specification of individual matrix values. Such a control structure is aesthetically attractive because the HP has the impression
of interacting with an intricate system, whose behavior is only
partially understood. The causal link between matrix and behavior is non-trivial; however, it is perfectly coherent and offers structural integrity. Although individual actor behavior is unpredictable, the system nevertheless offers a strong overall impression of
coherent performance.
2.3 Mapping between human performer and
system dynamics
Mapping aims to create specific functional relationships between
gestural information and musical responses. Conventional
approaches to mapping are deterministic, yielding predictable results. Mapping typically generates musical responses as
selected from a user-designed palette of options. For example,
circular gestures always map to loudness changes. Given an aesthetic orientation favoring unpredictability and surprise, the concept of deterministic mapping is problematic. The Actor model
suggests an alternative: the human performer interferes with the
parameters affecting system dynamics - not unlike Sal Martirano
commenting on playing the SALMAR Construction: "it was
like driving a bus” (Chadabe 1997).
Fig. 2 Spatial configurations
resulting from different parameter
settings for the Actor model
a) Local oscillations
b) Circular movement
c) Basins of attraction
d) Radial configuration
Figure 2 displays a collection of 4 snapshots, momentary spatial configurations as captured in a continuous animated process;
the window size is 1000 by 1000 pixels. Social affinities are set at random in the range of 50 to 500 pixels, whereas actor sensitivity
is fixed (at a radius R = 5 pixels) in this experiment. Each image
displays the configuration after 100 iterations (typically more),
illustrating how different complex spatial configurations might
emerge from different affinities. Only the affinities-matrix is
occasionally slightly modified while the process is running.
In Figure 2a, all actors coalesce into four specific locally oscillating configurations. The effect of the forces of attraction and
repulsion merges into a stable spatial pattern. Circular movement
is clearly seen in Figure 2b with parallel trajectories showing
evidence of attraction and repulsion cancelling out. Five major
islands of activity emerge in Figure 2c while a spatial explosion
occurs in Figure 2d.
Figure 3 illustrates the spatiotemporal behavior of 10 actors in
a total agency of 50. The vertical axis shows the position along
the X-axis and the horizontal axis shows iterations. We explore
the behavioral scope of the system through interactive modification of the affinities-matrix. As the actors interact, their trajectories oscillate between quasi-periodic and irregular. We end up
having a control structure of high plasticity, whose spatiotemporal
complexity blends morphologically into the sound application to
be discussed in the next section.
The Actor system implies two presentation modes: as a large-scale, projected real-time audiovisual installation and as a
machine-mediated solo performance. It is implemented as two
concurrent processes: agency behavior, parametric control and
visualization are written in Java. A second process handling sound
synthesis receives control data from the Actors via Open Sound
Control (Schmeder et al. 2010).
3 Musical Application
In this section, we briefly introduce concatenative sound synthesis and earGram, the application used in this work. The discussion
covers how the sound features capture perceptual qualities of the
sounds, how to create sound spaces using the features as dimensions, and how different spatial configurations of sounds result
from different dimensions.
3.1 Concatenative Sound Synthesis
Historically, concatenative sound synthesis (CSS) can be grouped
with other sample-based techniques such as micromontage and
granular synthesis, which originated from the early musique concrète experiments. Briefly, CSS creates "musical streams by selecting and concatenating source segments from a large audio database using methods from music information retrieval" (Casey
2009). To a certain extent, CSS can be understood as being an
extension of micromontage and granular synthesis towards a
higher degree of automation.
What’s unique about CSS in relation to other sample-based
techniques is the annotation layer of the segments database,
which not only provides the user with a good description of the
audio source content, but also allows him/her to adjust, organize and re-synthesize the temporal dimension of the source in
refined ways. Segment annotations include features automatically extracted and grouped into a single vector with the help of
low-level audio descriptors, in a similar fashion to the audio-annotation layer of the MPEG-7 standard (Kim et al. 2005).
The sonic output of CSS systems depends on the audio descriptors used to organize the audio segments. The descriptor values
define the spatial configuration of the segments, defining neighborhood relations and relative distances. For example, two segments might present similar loudness values at different pitches,
which would place them close together along the loudness dimension but far apart along the pitch dimension.
Fig. 3 Illustration of the
spatiotemporal behavior of the Actor
model. The vertical axis represents
the position of each agent, while
the horizontal axis represents the
iterations. The curves illustrate how
the trajectories can oscillate between
quasi-periodic and chaotic paths,
revealing intricate patterns.
3.2 EarGram
1 The software along with its
documentation and many sound
examples are available at: https://sites.google.com/site/eargram/.
EarGram (Bernardes 2013, 2014) is an open-source and freely
available application created in Pure Data for the real-time creative exploration of concatenative sound synthesis (CSS).1 EarGram extends CSS, first attributed to Schwarz (2000), with new
possibilities for generative audio by adopting strategies from both
algorithmic-assisted composition and music information retrieval
(MIR). The latter strategies are responsible for (i) segmenting an
audio stream into elementary units, (ii) describing the most relevant features of the segments, and (iii) extracting patterns from
the resulting collection of segments. Additionally, the system
translates MIR terminology and concepts into a form more usable by musicians by relying on musicological and psychoacoustic
theories, and presents most of the processing stages of the system
in an intuitive manner, mainly through visualizations. The set of
MIR tools adopted in earGram constitutes a valuable aid for decision-making during performance by revealing musical patterns
and temporal organizations of the database, which are then used
to represent audio in common algorithmic-assisted composition
techniques.
Fig. 4 2D plot of the speech sound
database used in the musical component
of the system.
EarGram includes four generative modes: spaceMap, soundscapeMap, infiniteMode, shuffMeter, which cover a wide range of
musical applications, such as the automatic generation of soundscapes, remixes, and mashups, to cite a few. Of interest here is
the spaceMap mode, which is used to interact with the Actor
model, adding a sonic layer that offers musical functionality. The
interface of spaceMap is shown in Figure 4 as a plane whose axes
can be assigned to single audio descriptors or linear combinations of them. For example, the vertical axis might be loudness
and the horizontal axis might be pitch. Each sound segment is
represented by a (square) point in space, and their spatial organization is defined by their sound qualities (as measured by the
descriptors). The visual representation of the database is used to
play sound segments in the descriptor space as spatial trajectories.
Hovering the mouse pointer (round point) plays the sound that is
closest in the space. So, in the example, sliding the pointer vertically upward would play sounds that are louder and horizontally
to the right would play sounds higher in pitch. Diagonal upward
right-hand movement would play sounds with increasing pitch
and loudness. While small movements synthesize similar sounding segments, larger movements pick sounds with greater sonic
differences. SpaceMap allows the creation of highly controllable
sonic textures driven by the user.
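The selection rule that spaceMap applies to the pointer can be illustrated with a minimal nearest-unit lookup; the data layout below is an assumption, and earGram itself implements this behavior in Pure Data.

```python
# Illustrative sketch of spaceMap's selection rule: the unit closest to the
# pointer position in the 2-D descriptor plane is played.

import math

def closest_unit(pointer_xy, units):
    """Return the index of the sound segment nearest to the pointer."""
    px, py = pointer_xy
    return min(range(len(units)),
               key=lambda k: math.hypot(units[k][0] - px, units[k][1] - py))

# Example: three segments plotted by (loudness, pitch); the pointer picks the nearest.
segments = [(0.2, 0.1), (0.6, 0.4), (0.9, 0.9)]
print(closest_unit((0.55, 0.5), segments))  # -> 1
```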
4 Using the Actor Model to Drive earGram
4.1 Speech Sounds as Metaphor for Social
Interaction
2 http://archive.phonetics.ucla.edu,
last access on 7 January 2015.
Following the concept behind Gold's PPM and the Actor model,
we chose to work with a database of multi-linguistic speech sounds
to better express the idea of social interaction in the musical component of our system. Our aim was to represent social interaction,
and particularly the affinity among individuals, by the perceptual
proximity of speech sounds. This means that if the Actors' "society"
reaches a stable configuration, the sonic response of the system
should reduce the amount of variation to a minimum and, on the
contrary, highly deviating configurations should result in a high
level of sonic variation. Between these two poles there is a map
that regulates the degree of variation in the speech sounds.
The speech sounds were retrieved from the UCLA Phonetics Lab
Archive,2 which includes both native female and male speakers of
different languages, such as Bulgarian, Dutch, Estonian, Javanese,
Nepali, Portuguese, Zulu, among others. After some basic sound
editing to improve the sound file quality, including filters, equalization and noise removal, earGram automatically segmented the
collection of speech sounds into short snippets of 200 ms each.
Then, we addressed the most crucial pre-processing stage of our
database creation: the selection of a set of audio descriptors to
represent our segments in the system.
The analysis of the database segments comprised two main
tasks. First, we manually restricted the set of available audio
descriptors to a sub-set of audio features that included: noisiness, pitch, brightness, spectral width, and sensory dissonance.
Then, we weighted the set of selected audio descriptors to adjust
their contribution in the feature space. Weights were automatically assigned according to the computed variance of each of the
selected descriptors. By reducing the number of audio descriptors
and weighting their contribution, we not only discarded redundant information for the analysis of speech segments, but also
enhanced the computation of their perceptual similarity, which
consequently improves their visual representation on the interface.
In order to plot the segments, represented by their multidimensional feature vectors, in a 2-D space, allowing users to physically navigate their
representation or, for the purposes of this work, to map the 2-D
visual representation of the Actor model onto our space, we reduced
the dimensionality of the segments' feature vectors to two
dimensions using the star coordinates algorithm, first proposed
by Kandogan (2000) and used in the scope of CSS by Bernardes et
al. (2013). Figure 4 shows a 2-D plot visualization of the database
whose axes are a linear combination of the aforementioned audio
descriptors.
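A compact sketch of the two analysis steps described above (variance-based descriptor weighting followed by a star coordinates projection) is given below; the axis angles and lack of normalization are assumptions about one common way to lay out star coordinates, not a reproduction of earGram's code.

```python
# Sketch: per-descriptor variance weights, then a star coordinates projection
# (Kandogan 2000), where each descriptor axis is a unit vector in the plane and a
# segment's 2-D position is the weighted sum along those axes.

import math

def variance_weights(features):
    """features: list of per-segment descriptor vectors (all the same length)."""
    d = len(features[0])
    n = len(features)
    means = [sum(f[k] for f in features) / n for k in range(d)]
    return [sum((f[k] - means[k]) ** 2 for f in features) / n for k in range(d)]

def star_coordinates(vector, weights):
    """Project one weighted descriptor vector onto the plane."""
    d = len(vector)
    x = y = 0.0
    for k in range(d):
        angle = 2.0 * math.pi * k / d          # evenly spaced descriptor axes
        x += weights[k] * vector[k] * math.cos(angle)
        y += weights[k] * vector[k] * math.sin(angle)
    return x, y

# Example with three segments described by (noisiness, pitch, brightness, width, dissonance).
feats = [[0.1, 0.5, 0.3, 0.2, 0.4], [0.7, 0.2, 0.6, 0.1, 0.5], [0.4, 0.9, 0.2, 0.3, 0.1]]
w = variance_weights(feats)
points = [star_coordinates(f, w) for f in feats]
```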
4.2 Integrating the Systems
After the database creation, we tackled the mappings between the
Actor model and earGram’s spaceMap, i.e. the visual and musical components of our system. In spaceMap, synthesis is typically
controlled by defining trajectories in earGram's interface with
the mouse. EarGram then retrieves the closest unit to the mouse
position and synthesizes the selected segment with a Gaussian
amplitude envelope. In our work, we replaced the mouse control
by the position of each Actor in the space, given by its X and Y
coordinates. This rather simple mapping strategy is effective in
the sense that the segments plotted in the spaceMap interface
are organized according to their perceptual distance. Therefore,
Actors with high affinity values are close together in the descriptor space, so they will trigger similar-sounding units, resulting
in affinity being related to perceptual similarity. A video with
an example of the final system is available at
https://vimeo.com/118500562. In this example, there are 50 actors, the affinity
matrix was initialized with low values, and the sensitivities are
initialized all at 5 pixels.
Given the large amount of data sent to earGram via the OSC
protocol, we ran into both technical and aesthetic problems
related to the rate of transmitted data. Not only did the network
block information in unclear ways, but also large amounts of data
were incompatible with our sample-based technique, resulting in
a constantly oversaturated mass of synthesized grains. To minimize this problem, we both adjusted the video frame rate to 25
frames per second (the same rate at which the location of Actors
is computed) and imposed a different clock to control the rate of
data sent over the network (every 800 ms).
Given the large number of data points received every 800 ms, we
decided to store the Actors' locations in memory and sequentially
read them at equidistant time intervals within the 800 ms. Therefore, we hear a new segment every n ms, where n equals 800 ms
divided by the total number of Actors. Finally, we added an extra
processing layer that looks at the overall stability of the Actors
in the space, computed by measuring the flux of information at
every received package of information, and mapped the value to
the wet-dry parameter of a spectral freeze audio effect based on
Paul Nasca’s Extreme Sound Stretch algorithm3 in earGram.
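An illustrative sketch of this bridge is shown below, using the python-osc package rather than the authors' Java/Pure Data implementation; the OSC addresses, host, port and the direction of the flux-to-wet/dry mapping are assumptions, while the scheduling mirrors the behavior described above (800 ms packets, positions played back at equidistant intervals).

```python
# Sketch of the Actor-model-to-earGram bridge: actor positions are collected in
# 800 ms packets and sent sequentially at equidistant intervals, while a simple
# flux measure drives the wet/dry of the spectral freeze effect.

import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed earGram OSC endpoint
WINDOW_S = 0.8                                 # one packet of actor data every 800 ms

def flux(previous, current):
    """Average displacement of the actors between two packets (a stability measure)."""
    moves = [abs(x1 - x0) + abs(y1 - y0) for (x0, y0), (x1, y1) in zip(previous, current)]
    return sum(moves) / len(moves) if moves else 0.0

def send_packet(previous, current):
    """Send one 800 ms packet: positions spaced evenly in time, plus the wet/dry value."""
    wet = 1.0 - min(1.0, flux(previous, current))  # assumed mapping: stable actors -> more freeze
    client.send_message("/eargram/freeze/wetdry", wet)
    interval = WINDOW_S / max(1, len(current))     # e.g. 800 ms / 50 actors = 16 ms per actor
    for x, y in current:
        client.send_message("/eargram/spacemap/pointer", [float(x), float(y)])
        time.sleep(interval)
```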
5 Discussion
3 http://hypermammut.sourceforge.
net/paulstretch/, last access on 7
January 2015.
A user engages with the proposed system mainly via modification
of the parameters (global affinity matrix and local sensitivity values) in the Actor model, exerting influence on the dynamic behavior of the system. This indirect method for influencing the system
behavior has implications for both the visual and sonic components
of the system along two conceptual dimensions: level and extent
of activity. The level of activity is related to the displacement of
the actors, ranging from stationary to highly dynamic. The extent
of activity refers to the distribution of the actors in the plane,
which can vary between highly concentrated and spread out.
However, other decisions also influence the sonic outcome,
such as selection and pre-processing of the sound source material, selection of the features used to define the dimensions of
the space in earGram, and the affinities and sensitivities for the
Actor model. In general terms, the source material determines
the range of sonic possibilities. Speech sounds will produce a different outcome than instrumental, environmental, or synthetic
sounds. The features have a direct impact on the distribution of
the sounds in the plane in earGram. Changing the features will
reorganize the same sounds according to different perceptual
similarities, such that the same trajectory will generate a different sonic outcome. In this section, we will discuss the impact that
each decision has on the aesthetic result.
In general terms, the spatiotemporal behavior of the system is
determined by the level of social stress, which, in turn, depends
on the magnitude and homogeneity of the affinities and sensitivities. High affinities result in strong attraction between actors,
while low affinities generate repulsive forces. Homogeneity in the
affinity matrix also impacts the global dynamic behavior. Whenever the user sets all the affinities to the same value, the level of
activity decreases, resulting in point attractor behavior. Heterogeneous affinity values entail complex dynamic behavior. The
sensitivity also plays an important role in the dynamic behavior
of the actors because it determines the radius R of influence of the
affinity values a(n,m) between Actors An and Am (see fig. 1a). High
sensitivities force the actors to consider distant neighbors, while
low sensitivities cause the actors to only interact with nearby
neighbors.
For example, high-magnitude homogeneous affinity values with
high sensitivity will likely result in all actors clustered in a point
because they are all highly attracted to one another. Low-magnitude homogeneous affinity with low sensitivity will likely result
in a uniformly spread out configuration across the plane because
all actors are equally repulsed by their nearby neighbors. Notice
that both scenarios result in low levels of activity because the
examples suppose a homogeneous affinity matrix and nearly equal
sensitivity values. Highly dynamic complex behavior is commonly
achieved through heterogeneous affinities and sensitivities.
The sonic response depends on the level and extent of activity
as well. On the one hand, the level of activity is responsible for the
dynamic response of the system. Each spatial trajectory results
in a sonic trajectory that translates as temporal variation of the
corresponding sound texture. On the other hand, the extent of
activity influences the diversity of the sonic response by exploring different regions of the sound space.
The level of social stress drives the visual and sonic components of the system in symbiosis. The more complex and chaotic the oscillatory behavior of the actors, the more heterogeneous the sonic
response. Stable configurations result in sound textures with little variation. In other words, the actors' dispersion is related to
the variability or "spreadness" of the sound segment selection,
which equates with the level of coherence of the resulting texture, due to the organization of the segments on earGram's feature space. In between the two poles a wide and virtually endless
range of possibilities exists.
Another interesting feature of the matrix-based control structure is the synthesis of smooth trajectories when one or more values
fluctuate in the sensitivities-matrix. Since actors move through
the consideration of a step-by-step evaluation process, changes
gradually accumulate towards a spatial niche of lower social
stress—the pull towards the basin of minimum stress decreases
as a function of the distance of the actor from that location. In
addition, considering one actor, since all of its neighboring actors
are engaged in the same process, global behavior crystallizes
into trajectories of considerable plasticity—the system produces
smooth waves of spatiotemporal patterns. These smooth trajectories of actors in the visual domain are then mapped to the pointer
position responsible for selecting audio segments in earGram’s
organized database visualization. The resulting sonic feedback
matches the dispersion/cohesiveness and continuity of both the
overall visual representation and the trajectories of individual
actors. Furthermore, stable spatial configurations of the Actors
society are further distilled into a blurred sonic texture obtained
through spectral "smoothing" and filtering. The longer the actors'
inactivity, the blurrier the texture becomes and the fewer spectral
peaks are synthesized, thus reinforcing in the sonic domain the
spatial configuration of the visual component of the system.
A fundamental contribution of this work in relation to its previous version, or for that matter any related work in auditory display
and sonification, is the use of concatenative sound synthesis, an
innovative sample-based synthesis technique, at the core of the
software earGram. The integration of earGram with the Actor
model not only offered us more plastic and expressive sonic
results in relation to related approaches—which tend to focus
on additive, subtractive or physical synthesis models—but also
allowed us to better match the conceptual basis of the system
through the synthesis of speech sounds. By adopting a fixed database configuration in earGram, we favored one robust solution
over a myriad of possibilities offered by the system. However, the
current integration of both systems allows a user to easily experiment with different audio sources or even different feature spaces
(i.e. database organization in the interface), while maintaining
the same structural mapping, interactive behaviour, and to a certain extent the aesthetic basis. While adopting a different audio
source has a greater impact on the sonic result, changing the feature space that organizes the audio segments database will offer
a lower degree of variability, which equates in musical terms to
the creation of variation of the same musical material. Ultimately,
the positive outcome of this work spurs experimentation on sample-based techniques driven by a-life behaviour.
6 Conclusion and Future Work
Multi-agent systems commonly exhibit complex behavior arising from
multiple local interactions following simple rules. The dynamics
of self-organizing systems has been extensively explored aesthetically in artistic settings. Here, we use the Actor model of
social interactions to control a concatenative synthesis engine
called earGram in real time. The self-organizing behavior of the
Actor model was designed to be aesthetically interesting visually,
exploring the space as complexity emerges from the interactions.
This visual complexity is used to aesthetically explore the feature
space in earGram, whereby spatial trajectories become gradually
evolving sonic textures. Trajectories are a powerful way to control earGram creatively because the spatial configuration reflects
perceptual relationships among the sounds. The Actor model
provides multiple trajectories, each controlling a sound texture
in parallel, which result in an intricate and ever-evolving sonic
tapestry.
User interaction is essential to explore the sonic result. Currently, the user interacts with the system by changing the parameters affinity and sensitivity that control the dynamic behavior
of the Actor model. We plan to enhance the interactive feedback
loop with a gestural device, such as MS Kinect. The gestures can
be used to change parameter values in real-time. The sonic feedback would be used as system response to the interferences. The
performer affects the visual and sonic output indirectly since the
gestures do not control the system configuration, only the system
parameters. More interestingly, the human performer can use a
virtual presence device to interact directly with the actors. In this
case, the human performer becomes the external perturbation
that continuously upsets the states of equilibrium of the system
driven by aesthetic judgments.
Acknowledgements
Research reported here is supported by MAT, The Media Arts
and Technologies Project, NORTE-07-0124-FEDER-000061, and
financed by the North Portugal Regional Operational Programme
(ON.2 O Novo Norte), under the National Strategic Reference
Framework (NSRF), through the European Regional Development
Fund (ERDF), and by national funds, through the Portuguese
funding agency, Fundação para a Ciência e a Tecnologia (FCT).
References
Bak, P. How Nature Works: The Science of Self-Organized Criticality. New
York: Springer, 1996.
Bernardes, G. Composing Music by Selection: Content-Based Algorithmic-Assisted Audio Composition. PhD dissertation, University of Porto, 2014.
Bernardes, G., Guedes, C. and Pennycook, B. Eargram: An application
for interactive exploration of concatenative sound synthesis in Pure
Data. In M. Aramaki, M. Barthet, R. Kronland-Martinet and S. Ystad,
(eds), From Sounds to Music and Emotions, LNCS, 7900, 110-129. Berlin, Heidelberg: Springer, 2013.
Beyls, P. A Molecular Collision Model of Musical Interaction. In C. Soddu
(ed.), Proceedings of the Generative Arts Conference, Milan, Italy, 2005.
Beyls, P. Structural Coupling in a Musical Agency. In E. Miranda (ed.),
Artificial Life and Music, Evanston, WI: A-R Editions, 2010.
Beyls, P. Autonomy, Influence and Emergence in an Audiovisual
Ecosystem. In C. Soddu (ed.), Proceedings of the Generative Arts
Conference, Rome, Italy, 2012.
Blackwell, T., Bentley, P. Improvised music with swarms. In Proceedings
of the 2002 Congress on Evolutionary Computation, 2, 1462-1467, 2002.
Caetano, M., Manzolli, J., Von Zuben, F. Self-Organizing Bio-Inspired
Sound Transformation. Applications of Evolutionary Computing. LNCS,
4448, 477-487, 2007.
Camazine, S., Deneubourg, J-L, Franks, N., Sneyd, J., Theraulaz, G.,
Bonabeau, E. Self-Organization in Biological Systems. Princeton Studies
in Complexity, 2003.
Casey, M. Soundspotting: A new kind of process? In R. Dean (ed.), The
Oxford Handbook of Computer Music. New York, NY: Oxford University
Press, 2009.
Chadabe, J. Electric Sound: The Past and Promise of Electronic Music. Upper
Saddle River, NJ: Prentice-Hall, 1997.
Dittrich, Ziegler & Banzhaf. Artificial chemistries: A review. Artificial
Life, 7(3): 225-275, 2001.
Gold, R. The Plenitude. Cambridge, MA: MIT Press, 2007.
Kandogan, E. Star coordinates: A multi-dimensional visualization
technique with uniform treatment of dimensions. In Proceedings of the
IEEE Information Visualization Symposium, 2000.
Kauffman, S. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Oxford University Press, 1995.
Kim, H.-G., Moreau, N., & Sikora, T. MPEG-7 Audio and Beyond: Audio
Content Indexing and Retrieval. Chichester, UK: John Wiley & Sons, 2005.
Langton, C. Artificial Life: An Overview. Cambridge, MA: The MIT Press,
1997.
Miranda, E. Music composition using cellular automata. Languages of
Design, 2, 1994.
Reynolds, C. Flocks, Herds and Schools: A Distributed Behavioral Model.
In SIGGRAPH 87 Conference Proceedings, Anaheim, CA, 1987.
Schmeder, A., Freed, A., and Wessel, D. Best Practices for Open Sound
Control. In Linux Audio Conference, Utrecht, Holland, 2010.
Schwarz, D. A system for data-driven concatenative sound synthesis.
In Proceedings of the International Conference on Digital Audio Effects,
97-102, 2000.
Tidemann, A., Ozturk, P. Self-organizing Multiple Models for Imitation:
Teaching a Robot to Dance the YMCA. New Trends in Applied Artificial
Intelligence, LNCS, 291-302, 2007.
Todd, S., Latham, W. Evolutionary Art and Computers. Academic Press,
1992.
Ulanowicz, R. Complexity, Stability, and Self-Organization in Natural
Communities. Oecologia (Berl.), 43: 295-298, 1979.
Shaping Microsound
Using Physical Gestures
Gordan Kreković
Faculty of Electrical Engineering and Computing,
Zagreb, Croatia
[email protected]
Antonio Pošćić
Faculty of Electrical Engineering and Computing,
Zagreb, Croatia
[email protected]
Keywords: sound synthesis, parameter mapping, fuzzy logic,
dynamic stochastic synthesis.
This paper presents a system for controlling the structure of synthesized sounds at the waveform level using physical gestures. The purpose of the system is to allow intuitive, natural, and immediate interaction with a sound synthesis model based on a non-standard synthesis technique. Instead of manipulating numerical parameters, which are, in the case of non-standard synthesis, typically abstract and without acoustical meaning, musicians can shake a mobile device in order to shape the structure of synthesized waveforms. The system receives raw data captured from accelerometers, extracts relevant statistical features, and maps them into parameters of a dynamic stochastic synthesizer. The mapping is based on fuzzy logic in order to ensure a non-linear and non-injective relation defined within explicit mapping rules. Experimentation shows that the system provides natural, immediate, and expressive control which is convenient both in the composition process and in live performances.
1 Introduction
Sound synthesis using analog and digital electronic devices allows
composers to create novel sonorities that characterize their compositions uniquely. Controlling the timbre and its changes over
time is an important compositional aspect which provides coherence between musical form, structure, and material (Manousakis,
2009). Stockhausen emphasized that “every sound is the result of
a compositional act” (Stockhausen, 1963), while Di Scipio wrote
that “synthesis can often be thought of as micro-level composition” (Di Scipio, 1995), referring to the idea that sound synthesis allows composing timbres instead of just employing them in
higher-level musical structures (Brün, 2004).
Sound synthesis techniques particularly oriented towards micro-level composition and sound microstructure are the non-standard methods (Thomson, 2004). Instead of relying on theoretical
acoustical models, reproduction of actual sounds, or psychoacoustic phenomena, non-standard synthesis methods are based
on mathematical models and compositional abstraction (Holtzman, 1979). Such an approach allows composers to describe waveforms, their organization and transformation, without imposing
their acoustical consequences. Thereby, many compositional
aspects are reduced to controlling the sound synthesis process
and creating sounds at the waveform level.
Non-standard techniques, idiomatic to digital sound synthesis,
attracted the attention of researchers and composers, especially
in the 1970s. Even though most of the non-standard techniques
produce sounds by generating waveforms in the time domain,
several principally distinct approaches emerged: synthesis based
on rules (Berg, 1979; Berg, Rowe and Theriault, 1980; Brün and Chandra, 2001; Holtzman, 1979), the stochastic approach (Xenakis,
1992), fractal interpolation techniques (Yadegari, 1991; Monro,
1995; Dashow, 1996), and other approaches (Valsamakis and
Miranda, 2005; Collins, 2008).
Since non-standard synthesis techniques are not focused on
acoustical features of the synthesized sound, their controllable
parameters usually do not bear acoustical meaning. The parameters serve as abstract numerical inputs of mathematical models
for waveform generation. In order to achieve desired waveforms,
composers need to understand all the details of the applied synthesis model and its capabilities. While this is not a limiting factor
for composers who developed the synthesis models themselves,
the lack of intuitiveness may negatively affect the efficiency and inspiration of those who do not use their own models. The process of shaping micro-sound should be closer to the way
composers imagine sound structures. A more intuitive way of
controlling parameters of non-standard synthesis models would
allow composers to think within the musical domain during their
creative process. Additionally, such an approach would be more
convenient for applications in which immediate control is needed,
such as for live performances or interactive installations.
As a solution for intuitive control of non-standard synthesis
techniques, we propose a system for detecting physical gestures
and mapping them into sound synthesis parameters. Physical
gestures as a means for controlling the process of sound generation are already widely used in the area of dance-music intermodalities and applications related to enhancing musical content
by physical actions (Friberg, 2005; Heile, 2006). An advantage of
physical gestures for composing at the micro-level would be the
intuitive and immediate relation of natural movement with waveforms synthesized by non-standard models. Instead of manipulating numerical parameters, composers could realize their ideas
and develop unique expressivity by experimenting with physical
movements. The system proposed in this paper extracts selected
features from the physical gestures and maps them into synthesis parameters. The purpose of this mapping is to achieve natural relations between gestures and the structure of synthesized
waveforms. Transitively, physical gestures may also be related to
the acoustical features of the synthesized sound, but only to the
extent to which synthesis parameters are related to acoustical
features.
The system for shaping microsound using physical gestures
described in this paper employs dynamic stochastic synthesis as the underlying synthesis model. We selected this technique because it is purely parametric, unlike some other
non-standard methods which require setting up rules or initial
states.
2 Dynamic Stochastic Synthesis
Before presenting the overall system, here is a short overview of
the employed synthesis model. Dynamic stochastic synthesis was
devised by Iannis Xenakis as a result of his ambition to achieve
unified and simultaneous engagement on different time-scales
within the composition, from the overall structure of the composition to its microstructure and tone quality.
Dynamic stochastic synthesis generates samples by interpolating a set of breakpoints which change their amplitudes and positions in time stochastically. A breakpoint position is represented relative to the preceding breakpoint and is commonly called
“breakpoint duration”. Initial amplitudes and durations are usually chosen randomly or taken from a trigonometric function.
At every repetition of the waveform, these values are varied
independently of each other using random walk. That means that
both the amplitude and the duration of a certain breakpoint are
changed by adding random steps to the values in the previous
cycle, as shown in Figure 1. A succession of random steps applied to
all breakpoints causes the continuous variation of the waveform.
The amount and character of the variation depend on a selected
probability distribution and its parameters. Both the amplitude and duration random walks are each limited by two reflecting barriers which bounce excessive values back into the predefined range.
Fig 1. Breakpoints change their positions from one repetition to another. Light blue circles in the second cycle represent positions from the first cycle, while darker circles represent new positions.
These barriers prevent breakpoints from straying too far from
their initial positions and therefore enable control over amplitude
and frequency ranges of the overall waveform.
Parameterization of the synthesis model is achieved through:
(1) the number of breakpoints in a waveform, (2) barriers of the
amplitude random walk, (3) probability distribution of the amplitude random walk and its parameters, (4) barriers of the duration
random walk, and (5) probability distribution of the duration random walk and its parameters. The amplitude barriers provide control over the amplitude range of the generated waveform, while
the duration barriers define the minimal and maximal number of
samples between two breakpoints. If changes in amplitude and
duration between successive repetitions are small, the synthesized sound is relatively simple, but it can have interesting modulation effects. On the other hand, as changes become more prominent, the sound becomes more complex and noisier.
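As an illustration of the generation loop, the following sketch implements one such voice in code. It is illustrative only, not Xenakis's or our implementation: it assumes uniformly distributed random steps, linear interpolation between breakpoints, simple reflecting barriers, and arbitrary parameter values.

#include <random>
#include <vector>

// Sketch of one dynamic stochastic synthesis voice. Each breakpoint has an
// amplitude and a duration (in samples); both perform independent random
// walks bounded by reflecting barriers. All constants are illustrative.
struct Breakpoint { double amp; double dur; };

// Bounce a value back into [lo, hi] (reflecting barrier).
double reflect(double v, double lo, double hi) {
    while (v < lo || v > hi) {
        if (v < lo) v = 2.0 * lo - v;
        if (v > hi) v = 2.0 * hi - v;
    }
    return v;
}

int main() {
    std::mt19937 rng(42);
    const int numBreakpoints = 12;
    const double ampLo = -1.0, ampHi = 1.0, ampStep = 0.05;   // amplitude barriers and step size
    const double durLo = 4.0,  durHi = 60.0, durStep = 2.0;   // duration barriers and step size
    std::uniform_real_distribution<double> ampNoise(-ampStep, ampStep);
    std::uniform_real_distribution<double> durNoise(-durStep, durStep);
    std::uniform_real_distribution<double> ampInit(ampLo, ampHi);
    std::uniform_real_distribution<double> durInit(durLo, durHi);

    // Initial amplitudes and durations chosen at random within the barriers.
    std::vector<Breakpoint> bp(numBreakpoints);
    for (auto& b : bp) b = { ampInit(rng), durInit(rng) };

    std::vector<double> out;
    for (int cycle = 0; cycle < 200; ++cycle) {
        // One waveform repetition: linear interpolation between breakpoints.
        for (int i = 0; i < numBreakpoints; ++i) {
            const Breakpoint& cur = bp[i];
            const Breakpoint& nxt = bp[(i + 1) % numBreakpoints];
            int n = static_cast<int>(cur.dur);
            for (int k = 0; k < n; ++k)
                out.push_back(cur.amp + (nxt.amp - cur.amp) * (static_cast<double>(k) / n));
        }
        // Random walk: perturb every breakpoint before the next repetition.
        for (auto& b : bp) {
            b.amp = reflect(b.amp + ampNoise(rng), ampLo, ampHi);
            b.dur = reflect(b.dur + durNoise(rng), durLo, durHi);
        }
    }
    // 'out' now holds the synthesized samples, ready to be written to a file or audio device.
    return out.empty() ? 1 : 0;
}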
Detailed explanations of the synthesis model can be found in
(Serra, 1993) and (Luque, 2009), while several other researchers
proposed extensions of the original algorithm (Hoffman, 2000;
Brown, 2005; Young, 2010). Since the first implementation by Iannis Xenakis did not provide any means of controlling the synthesis process, some authors suggested interface designs for direct
parameter control (Hoffman, 2000; Bokesoy and Pape, 2003;
Brown, 2005). An interesting solution was also a mobile application which obtained parameters from multi-touch gestures and
accelerometers (Collins, 2011). In all of the mentioned approaches,
the values from controllers or sensors were directly mapped into
synthesis parameters, which remained exposed to composers.
In order to hide numerical parameters, in our previous research we
proposed intuitive control by an input audio signal (Kreković and
Brkić, 2012) and using MIDI messages (Kreković and Petrinović,
2013).
In this paper we present a novel approach focused on using
physical movements for controlling dynamic stochastic synthesis
and thereby for the intuitive shaping of synthesized waveforms.
The following chapters describe the system design, experiments,
and results yielded within this research.
3 System Overview
As has been mentioned previously, the main goal of this interactive system is to provide a simple and intuitive method for shaping waveforms by shaking a mobile device. Therefore, the central
problem of the research is establishing natural mappings between
the shaking gestures and the synthesis parameters.
The first step was to choose features of the shaking gesture
which could be extracted from the raw physical movements. Since
the intention was to establish a relation between the features
and the waveform structure, we searched for lower-level features
which do not bear meaning, symbols, or metaphors. The information used for shaping microsound should be contained at the phenomenological level of the physical movement and not at the symbolic level. To cover the kinematic, frequential, and spatial phenomenological aspects of physical movements we selected the
following features as relevant for our study: intensity, frequency,
and shaking direction.
In order to extract the aforementioned features using a computer
system, the prerequisite is to capture raw physical movements
and represent them as a stream of numbers. That functionality
is available in a mobile application which serves as a controller.
As the user shakes the mobile device, the mobile application captures data from accelerometers and sends them to the server side.
The communication between the application and the server side
is based on the Open Sound Control (OSC) protocol which is a
widely-used and standardized protocol for networking sound synthesizers, computers, and other multimedia devices. This way, we
can use any existing mobile application that can read data from
accelerometers and form appropriate OSC messages. Besides the
compatibility with many existing iOS and Android applications for
smartphones, the implementation of the OSC protocol also opens
opportunities for running client applications on different types
of devices such as smart watches and other wearable devices with
accelerometers. There are countless possibilities where the use of such devices in performances is concerned.
The server-side application receives the raw data from accelerometers, extracts relevant features, maps those features into
sound synthesis parameters, and finally produces an audio signal using dynamic stochastic synthesis. The mapping between
features of the shaking gestures and synthesis parameters is a
non-linear and non-injective mapping based on rules which can
be elicited from knowledge of a human expert. To implement such
a mapping we opted for an expert system based on fuzzy logic.
The overall system architecture with the corresponding data flow is shown in Figure 2.
Fig 2. Overall system architecture and data flow.
3.1 Movement Analysis
Raw data from accelerometers represent instantaneous accelerations of the mobile device and do not quantify physical gestures
directly. However, there are higher-level features extracted from
the raw data that can better describe the nature of movements.
Since dynamic stochastic synthesis produces rich and complex sounds with an organic quality, we opted for a shaking movement
as a gesture which can, to some degree, metaphorically represent
the waveform structure and its acoustical qualities.
The first feature, which represents the kinematic phenomenological aspect of the physical gesture, was the shaking intensity.
The intensity is calculated as the root mean square (RMS) of the
acceleration changes for all three axes:
where ax[n], ay[n], and az[n] represent the discrete values of acceleration in the n-th step along the axes x, y, and z, respectively. In order
to smooth the spikes, a running average filter is applied to the
calculated root mean square.
The second feature we selected was the measure of how fast the
user shakes the device. We called this feature “shaking frequency”.
This measure is calculated based on the number of zero-crossings.
Each time the motion of the device changes direction, the sign of the acceleration on some axis changes as well. Therefore, the
number of crossings through the zero value can be approximately
correlated with the frequency.
The purpose of the third and final feature is to quantify how
complex the shaking movement is. For simplicity’s sake, we call
this measure “shaking direction”. If the user shakes the device
just along one axis (e.g. up and down), this feature has a low value,
but if the user makes loops and changes directions very often, the
feature will have higher values. The measure is based on the maximal absolute difference between the acceleration changes on two axes at the same moment. A running average filter is again used
to smooth spikes.
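One possible realisation of these three features is sketched below. It is not our implementation: the window length, smoothing factor, normalisation and the exact form of each measure are assumptions made for illustration.

#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>

// Sketch of the three gestural features computed from one block of
// accelerometer samples. Constants and scaling are illustrative assumptions.
struct Features { double intensity; double frequency; double direction; };

// One-pole running average used to smooth spiky estimates.
struct RunningAverage {
    double state = 0.0;
    double alpha = 0.1;
    double operator()(double x) { state += alpha * (x - state); return state; }
};

template <std::size_t N>
Features extractFeatures(const std::array<double, N>& ax,
                         const std::array<double, N>& ay,
                         const std::array<double, N>& az,
                         RunningAverage& smoothIntensity,
                         RunningAverage& smoothDirection) {
    double sumSq = 0.0, maxDiff = 0.0;
    int zeroCrossings = 0;
    for (std::size_t n = 1; n < N; ++n) {
        // Acceleration changes between successive samples on each axis.
        double dx = ax[n] - ax[n - 1];
        double dy = ay[n] - ay[n - 1];
        double dz = az[n] - az[n - 1];
        // Intensity: RMS of the acceleration changes over all three axes.
        sumSq += (dx * dx + dy * dy + dz * dz) / 3.0;
        // Shaking frequency: count sign changes (zero-crossings) of the acceleration.
        if (ax[n] * ax[n - 1] < 0) ++zeroCrossings;
        if (ay[n] * ay[n - 1] < 0) ++zeroCrossings;
        if (az[n] * az[n - 1] < 0) ++zeroCrossings;
        // Shaking direction: largest absolute difference between changes on two axes.
        maxDiff = std::max({maxDiff, std::fabs(dx - dy), std::fabs(dy - dz), std::fabs(dx - dz)});
    }
    Features f;
    f.intensity = smoothIntensity(std::sqrt(sumSq / (N - 1)));
    f.frequency = static_cast<double>(zeroCrossings) / N;   // crossings per sample
    f.direction = smoothDirection(maxDiff);
    return f;
}

int main() {
    std::array<double, 64> ax{}, ay{}, az{};
    for (std::size_t n = 0; n < ax.size(); ++n) ax[n] = std::sin(0.3 * n);  // dummy shake along x
    RunningAverage si, sd;
    Features f = extractFeatures(ax, ay, az, si, sd);
    return f.intensity >= 0.0 ? 0 : 1;
}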
3.2 Fuzzy Mapping
The higher-level features are mapped into sound synthesis parameters according to desired relations to the synthesized waveforms.
The intensity is intended to correlate with the amplitude and
the structural complexity of the generated waveform. More vigorous movements of the device should cause higher amplitudes
of the synthesized signal and more prominent changes of breakpoint positions. The shaking frequency is intended to be related
to the frequency range and frequency drifts of the synthesized
waveform. It should, consequently, affect the impressions of pitch
and timbral flux. Faster movements of the device should result in smaller duration limits, greater frequency drifts, and shorter
waveform cycles. Finally, the shaking direction is intended to control the dynamicity of the amplitude and the frequency changes
of the waveform. Simpler movements of the device should cause
steadier waveforms and simpler timbres, while loops and sudden
changes of the shaking direction should cause faster developments of the waveform and thereby more complex timbres.
An expert system based on fuzzy logic was chosen as the most
convenient solution for mapping gestural features into synthesis
parameters. Fuzzy logic is a form of many-valued logic which supports the concepts of partial truth and linguistic variables (Zadeh,
1965). This is suitable for quantifying imprecise information and
making decisions based on incomplete data (Kosko, 1993).
Mappings between gestural features and synthesis parameters
are described by fuzzy rules with linguistic variables. An example
of a linguistic variable is the “intensity”, whilst its linguistic
terms are “low”, “medium”, and “large”. Inputs in a fuzzy logic
system are usually numeric, so it is necessary to convert these
numeric values into linguistic terms. An input value can partially satisfy several linguistic terms at the same time. For example, if the feature has a value of 0.3, the “intensity” is somewhere
between “low” and “medium”.
Fuzzy rules are specified in the form of IF-THEN statements:
IF (x1 IS S1) AND/OR ... (xn IS Sn) THEN y IS T
where xi represent the input fuzzy variables, y is the output variable, and Si and T stand for input and output linguistic terms. The first step of applying the fuzzy model is to convert input variables
into fuzzy logic variables. Then, output variables are calculated
by evaluating the rules and the output values are converted to
numeric form.
Fuzzy logic enables nonlinear many-to-many mappings
between gestural features and synthesis parameters, while the
rules based on linguistic variables can be easily understood and
specified by composers. Because of these properties, systems
based on fuzzy logic have been previously used in the musical
domain for coding musical gestures (Orio and De Prio, 1998), analyzing the emotional expression in music performance (Friberg
2004), mapping between visual and aural information (Cádiz,
2006), sound synthesis (Miranda and Maia, 2005; Cádiz and Kendall 2005), and several other applications.
The fuzzy logic model specifies input and output variables, membership functions, fuzzification and defuzzification methods, and mapping rules for the expert system based on fuzzy logic. In our implementation, the fuzzy logic model can be specified using
Fuzzy Control Language (FCL). This language is standardized
by the International Electrotechnical Commission standard IEC
61131-7. The fuzzy rules have an intuitive IF-THEN form which
allows musicians to modify and write new rules by themselves.
The fuzzy logic model used for this research accepts three features extracted from the raw data received from the mobile application. It has five outputs which represent values of the sound synthesis parameters. All the membership functions
used in the fuzzy model have a triangular form. To defuzzify output variables, the fuzzy logic model uses a technique based on the
center of gravity which is the typical approach for models with
real-valued output variables.
The rule set for calculating the synthesis parameters consists of 30 rules
which were manually written and adjusted after several iterations
of subjective testing by the authors. Here are some examples of
the rules:
IF frequency IS little THEN durationUpperLimit IS small;
IF direction IS prominent THEN amplitudeVariation IS large;
IF intensity IS moderate THEN amplitudeLimit IS moderate;
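To illustrate the mechanism, the sketch below evaluates rules of this kind using triangular membership functions and centre-of-gravity defuzzification. It is an illustration of the general approach only: the actual system relies on the jFuzzyLogic engine driven by an FCL specification, and the membership-function breakpoints shown here are invented for the example.

#include <cmath>
#include <cstdio>

// Triangular membership function defined by its left foot, peak and right foot.
struct Triangle {
    double a, b, c;
    double operator()(double x) const {
        if (x <= a || x >= c) return 0.0;
        return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
    }
};

int main() {
    // Illustrative linguistic terms on a normalized [0, 1] scale.
    Triangle freqLittle{-0.1, 0.0, 0.5};   // "frequency IS little"
    Triangle freqHigh{0.5, 1.0, 1.1};      // "frequency IS high"
    Triangle durSmall{-0.1, 0.0, 0.5};     // "durationUpperLimit IS small"
    Triangle durLarge{0.5, 1.0, 1.1};      // "durationUpperLimit IS large"

    double frequency = 0.3;   // crisp input from the movement analysis stage

    // Fuzzification: degree to which each rule's antecedent holds.
    double wLittle = freqLittle(frequency);   // IF frequency IS little THEN durationUpperLimit IS small
    double wHigh   = freqHigh(frequency);     // IF frequency IS high THEN durationUpperLimit IS large

    // Defuzzification by centre of gravity over a sampled output universe:
    // each output term is clipped at its rule's firing strength, the clipped
    // sets are combined with max, and the centroid of the result is returned.
    double num = 0.0, den = 0.0;
    for (double y = 0.0; y <= 1.0; y += 0.01) {
        double mu = std::fmax(std::fmin(wLittle, durSmall(y)),
                              std::fmin(wHigh, durLarge(y)));
        num += y * mu;
        den += mu;
    }
    double durationUpperLimit = (den > 0.0) ? num / den : 0.5;
    std::printf("durationUpperLimit = %.3f\n", durationUpperLimit);
    return 0;
}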
3.3 Implementation
The server-side was implemented using Pure Data, a visual programming environment for music and multimedia projects. The
expert system based on fuzzy logic was implemented as a Pure
Data external component which served as an interface between
Pure Data and the jFuzzyLogic library (Cingolani and Alcalá
Fernández, 2012). Since jFuzzyLogic was written in Java, wrapper
functions that rely on JNI calls have been implemented to serve
as glue code between the main functions of the external written
in C and the Java library.
4 Results
Experimentation with the system has shown that the intended
mapping between physical gestures and the synthesis parameters has been achieved in accordance with initial requirements.
Since we implemented a priori knowledge about synthesis parameters within the fuzzy model, physical gestures are also intuitively related to acoustical results. Stronger shaking causes a more complex and louder sound, faster shaking produces a higher average pitch with an increase in frequency drifts, while changes in
the direction of shaking also affect the synthesized timbre. The
overall impression of the sound is transparently and immediately
related to physical gestures. Figure 3 shows data captured during one experiment and demonstrates the established relations between selected features of the shaking movement, sound synthesis parameters, and the acoustical qualities of the synthesized sound.
Fig 3. Data captured during a period of 11 seconds. The top chart shows features extracted from physical movements, while the second and the third charts show values of synthesis parameters produced by the expert system based on fuzzy logic. Pictured at the bottom are the waveform and the spectrogram of the resulting sound.
The expressivity of this interface is satisfying. By combining
different shaking intensities, frequencies, and directions, various timbral results can be achieved, from simple steady sounds
to buzzy and noisy timbres which are characteristic of dynamic
stochastic synthesis. However, mapping three movement features into five sound synthesis parameters entails deliberate and necessary limitations of expressivity when compared to direct
parameter control. Additionally, the selected features of shaking
movements are not completely independent. For instance, it is
difficult to increase the shaking intensity without increasing the
shaking frequency. The consequence of such a dependency is that
sounds with both low pitch and complex timbral texture cannot
be easily achieved. However, the expressivity of the synthesized
sound corresponds to the expressivity of the shaking movement,
so we believe that users will not be able to notice those missing
aspects of expressiveness, unless they have a lot of experience
with dynamic stochastic synthesis and strict expectations before
they start using the system.
5 Conclusions and Future Direction
As a solution for intuitive and immediate sound shaping at the
micro-level, we proposed a system for mapping physical gestures
into parameters of dynamic stochastic synthesis. The benefit of
the proposed approach in comparison with direct parameter control is that it is straightforward and does not require deep understanding of the underlying sound synthesis model. This system
is also convenient for live performances in which transparent
mapping between movements and sound can be exploited in
many ways. Unlike other similar systems for controlling synthesis parameters, this one is particularly focused on a non-standard
synthesis method and therefore can be observed as an approach
for shaping microsound using physical gestures.
From this point on, future research can continue in two different directions. The first one is achieving even closer connections
between movements and the microsound by developing a new
non-standard synthesis technique which would directly rely on
nuances of physical movements instead of on stochastic processes.
Such a synthesis method would be interesting for dancers and
choreographers who could explore the links between microstructures and acoustical features of synthesized sounds and dance
movements.
The other research direction would be completely different. Instead of subordinating a synthesis model to the nature of
movement, the system for gesture detection could be extended to
understand a much larger vocabulary of complex gestures. That
way, gestures could be used as symbols and metaphors for triggering various modes of sound synthesis. As a result, higher expressivity could be achieved with interesting results in the context of
dance-music intermodalities.
To conclude, the results of this research are generally encouraging with regard to our intention to develop a system for controlling the sound microstructure with physical movements. The
proposed approach can be employed to control dynamic stochastic synthesis more easily and effectively, it can be used in live performances, and it can serve as a base for future research.
References
Berg, Paul. 1979. “PILE – A Language for Sound Synthesis”. Computer
Music Journal 3(1), pp 30-37
Berg, Paul, Rowe, Robert, and Theriault, David. 1980. “SSP and Sound
Description”. Computer Music Journal 4(1), pp 25-35
Bokesoy, Sinan and Pape, Gerard. 2003. “Stochos: Software for Real-Time Synthesis of Stochastic Music”. Computer Music Journal 27(3), pp
33-43
Brown, Andrew. 2005. “Extending Dynamic Stochastic Synthesis”.
International Computer Music Conference, Barcelona, Spain
Brün, Herbert. 2004. When Music Resists Meaning, chapter From Musical
Ideas to Computers and Back. Wesleyan University Press
Brün, Herbert and Chandra, Arun. 2001. A Manual for Sawdust.
Accessed December 30, 2014. http://academic.evergreen.edu/a/arunc/
brun/sawdust/sawdust.htm
Cádiz, Rodrigo. 2006. “A Fuzzy-Logic Mapper for Audiovisual Media”.
Computer Music Journal 30(1), 67–82.
Cádiz, Rodrigo and Kendall, Gary. 2005. “A Particle-Based Fuzzy Logic
Approach to Sound Synthesis”. Paper presented at the Conference on
Interdisciplinary Musicology, Montreal, Canada
Cingolani, Pablo and Alcalá Fernández, Jesús. 2012. “jFuzzyLogic:
A Robust and Flexible Fuzzy-Logic Inference System Language
Implementation”. Paper presented at the 2012 IEEE International
Conference on Fuzzy Systems, Brisbane, Australia, pp 1-8
Collins, Nick. 2008. “Errant Sound Synthesis”. Paper presented at the
International Computer Music Conference, Belfast, UK
Collins, Nick. 2011. “Implementing Stochastic Synthesis for SuperCollider and iPhone”. Paper presented at the Xenakis
International Symposium, London
Dashow, James. 1996. “Fractal Interpolation”. Computer Music Journal
20(1), pp 8-10
Di Scipio, Agostino. 1995. “Inseparable Models of Materials and of
Musical Design in Electroacoustic and Computer Music”. Journal of New
Music Research 24(1), pp 34-50
Friberg, Anders. 2004. “A Fuzzy Analyzer of Emotional Expression in
Music Performance and Body Motion”. Paper presented at the Music and
Music Science, Stockholm, Sweden
Friberg, Anders. 2005. “Home Conducting – Control the Overall Musical
Expression with Gestures”. Paper presented at the International
Computer Music Conference, Barcelona, Spain
Heile, Bjorn. 2006. “Recent Approaches to Experimental Music Theatre
and Contemporary Opera”, Music and Letters 87(1), pp 72-81
Hoffmann, Peter. 2000. “The New GENDYN Program”. Computer Music
Journal 24(2), pp 31-38
Holtzman, Steven R. 1979. “A Description of an Automated Digital Sound
Synthesis Instrument”. Computer Music Journal 3(2), pp 53-61
Kosko, Bart. 1993. Fuzzy Thinking: The New Science of Fuzzy Logic. New York:
Hyperion
Kreković, Gordan and Brkić, Igor. 2012. “Controlling Dynamic
Stochastic Synthesis with an Audio Signal”. Paper presented at the
International Computer Music Conference, Ljubljana, Slovenia
Kreković, Gordan and Petrinović, Davor. 2013. “A Versatile Toolkit
for Controlling Dynamic Stochastic Synthesis”. Paper presented at the
Sound and Music Computing Conference, Stockholm, Sweden
Luque, Sergio. 2009. “The Stochastic Synthesis of Iannis Xenakis”.
Leonardo Music Journal, 19, pp 77-84
Manousakis, Stelios. 2009. “Non-standard Sound Synthesis with
L-systems”. Leonardo Music Journal 19, pp 85-94
Monro, Gordon. 1995. “Fractal Interpolation Waveforms”. Computer
Music Journal 19(1), pp 88-98
Orio, Nicola and De Piro, Carlo. 1998. “Controlled Refractions: Two
Levels Coding of Musical Gestures for Interactive Live Performances”.
Paper presented at the International Computer Music Conference, Ann
Arbor, Michigan, USA
Serra, Marie-Hélène. 1993. “Stochastic Composition and Stochastic
Timbre: GENDY3 by Iannis Xenakis”. Perspectives of New Music 31(1),
pp 236-257
Stockhausen, Karlheinz. 1963. Texte zur elektronischen und instrumentalen
Musik. Verlag M. DuMont Schauberg
Thomson, Phil. 2004. “Atoms and Errors: Towards a History and
Aesthetics of Microsound”. Organized Sound 9(2), pp 207-218
Valsamakis, Nikolas and Miranda, Eduardo R. 2005. “Extended
Wave-form Segment Synthesis, a Non-standard Synthesis Model for
Composition”. Paper presented at the Sound and Music Computing
Conference, Salerno, Italy
Xenakis, Iannis. 1992. Formalized Music. Stuyvesant, NY: Pendragon Press.
Yadegari, Shahrokh David. 1991. “Using Self-Similarity for Sound/Music Synthesis”. Proceedings of the International Computer Music Conference, Montreal, Canada
Young, Jonathan. 2010. “Rethinking Synthesis: Extending and Exploring
Gendyn”. BA thesis, University of Sussex: Department of Informatics.
Zadeh, Lotfi A. 1965. “Fuzzy Sets”. Information and Control 8, pp 338–353
Reflections on Live Coding Collaboration
Alex McLean
ICSRiM, School of Music, University of Leeds, UK
[email protected]
Keywords: live coding, algorave, jazz improv, collaboration
Reflections on a number of live coding collaborations with improvisors, choreographers and performance artists, drawing from
informal discussion and audience feedback.
1 Introduction
Through my practice as a live coder of music, I have enjoyed varied collaborations with percussionists, live artists, performance
artists, dancers and choreographers, as well as other live coders.
In the following short paper I will reflect on a number of these
collaborations, including with/within algorave, live Jazz improv,
performance art and choreography practice. Some focus will be
on the role of time and language, as core themes in live coding,
but I will also consider wider cultural issues, and the role of collaboration in making live coding meaningful. I conclude by considering how technology could better support close collaboration in the future. Throughout, I draw from informal reports by collaborators and audience members, as well as my own reflections as a live coding performer.
2 Making collaboration visible:
Inter- and intra-technology
When performing with technology on stage, there can be a lingering feeling that some aspect of the performance is invisible. Slub
have projected screens since inception in the year 2000 (Collins
et al. 2003), a habit which has been taken up by the live coding
community at large (Ward et al. 2004). Slub consists of Adrian
Ward, Dave Griffiths and myself in various combinations (Fig. 1 shows Griffiths and McLean), but our collaboration does not take
place in our technology, but through the musical and sonic structures we produce. We do make a network on stage, but this is
only to create a shared clock so that we may coordinate tempo
changes, and share the same down beat. Our systems are otherwise decoupled, our collaboration being between our different
Fig. 1 Slub live coding at the Old
Operating Theatre London, 14th
January 2010. Photo: Evan Raskob
systems rather than through the same system. This is not clear to
all audience members however, who through informal post-performance discussion have occasionally revealed an assumption
that we are working on different parts of a technological machine,
rather than working on our own machines and collaborating as
musicians within a laptop ensemble.
While holding correct assumptions about the mechanics of a performance is not always important to an audience member’s appreciation of a piece, there is one aspect which I consider critical:
how the audience perceives balance between performers. I collaborate with instrumentalists and dancers as equals, as experienced
improvisors with equal technical abilities over our instruments,
languages and/or our bodies. The question is, how can such
collaborations be staged to get their nature across, as balanced
exchanges between two or more creative individuals? Reflecting
the general role of computation in culture, an audience member’s
assumption might be that the laptop operator is somehow controlling the other performer, or at least processing their sound or
movements in some way. Another assumption might be that the
laptopist is carrying out mundane operations, while an instrumentalist or dancer is contributing the real creativity to the performance through ‘authentic’ gesture.
Fig. 2 Sound Choreographer <>
Body Code, Audio:Visual:Motion
Manchester, March 2013.
Photo: MIRIAD
Of course in many technology oriented performances, such
assumptions as described above are actually true, and great
imbalance between laptopist and a more ‘physical’ collaborator is
not always seen as an important artistic consideration. However,
the collaborations I have taken part in have always looked for balance. Kate Sicchio and I are developing a live code and live choreography performance as a confluence of our practices, setting up
a feedback loop between choreography, the body, code, sound and
back into choreography (see Fig. 2, and McLean and Sicchio 2014).
We are ambivalent about the success of this piece, our experience
as performers connecting our two notations has at times been very
good, but the physical strain placed on Kate on her side of the loop
led one audience member to report feeling that I (as programmer)
was torturing Kate (as dancer). In the piece, Kate’s movements
interfere with my code, but any torture felt by me is solely cognitive, and so less visible. Kate is herself a technologist as well
as (and indeed as part of) being a choreographer, and has been
instrumental in the recent conceptual development of live coding,
but it can be difficult to get the nature of our collaboration, as an exchange between reflective technologists, across. Where we have agreed our performance has really worked is where we have explained and discussed it first.
As an aside, this work carries a key problem when experimenting with collaborative performance: such performances are set up
to fail; ideas collide and we learn from the pieces. All we can really
do is embrace the risk, and hope that audience members perceive
some of the possibility that we are reaching for, and often miss.
Fig. 3 Hession/McLean duo practice
session, Leeds, 2014.
Photo: Paul Hession
1 See http://canute.lurk.org/ for
information about and recordings
of Canute.
Returning to the question of audience perception; how can
collaboration through body and code be made more visible? One
collaboration with live coder and drummer Matthew Yee-King as
Canute1 looks for ways of sharing data between an instrumentalist and live coder. Matthew produces probability distributions of
hits on his drum kit, visualising them and sending them to me
as Tidal patterns (McLean 2014), which I then transform through
live coding with further visualisation. Six performances in, audience response has been increasingly positive in terms of encores and dancing, although perhaps responding more to the musical end result and less to the conceptual basis of the work, which is only visualised in the abstract.
A more directly interventionist approach has been found in
collaboration with performance artist Susanne Palzer. Susanne
curates a series of “OPEN_PLATFORM” happenings based on the
idea of “Technology without Technology”, exploring notions of
Fig. 4 Binary Transmission, Palzer
and McLean, Access Space Sheffield,
6th December 2013. Photo: Susanne
Palzer
Fig. 5 On-Gaku, Palzer and McLean,
Wharf Chambers, 25th January 2015.
Photo: Rodrigo Velasco
digital art outside of the normal frame. She has developed a series
of performance pieces where she steps on and off a (wooden) platform, sometimes with lights also switching on and off, exploring
the digital in performance. We have collaborated on two performances so far. In the first, “Binary Transmission” (Access Space Sheffield, 6th December 2013; Fig. 4), I knitted while Susanne stepped on and off and around her wooden platform, a knit for every on, and a purl for every off. In this way her discrete, binary
movements were transduced into the binary pattern of fabric. In
our second collaboration “On-Gaku” (Bloc Studios Sheffield, 12th
July 2014; Wharf Chambers Leeds, 25 January 2015, Fig. 5), I used
a laptop rather than knitting needles, and did my usual live coding
with Tidal. However, we hooked up a pressure sensor to Susanne’s
platform, so that my screen was only projected while she stepped
‘on’. I worked using a wireless keyboard, and using the projection
as my screen, so had to cope with only seeing the code I was editing for fleeting moments. In joining our individual practices in this way, our difficulty was more visible on both sides. In my case,
I struggled to work as I could not see my screen for most of the
time, and in Susanne’s, her physical exertion was clear.
It is perhaps telling that collaborations I am involved in often
end up looking for ways of balancing difficulty and friction in interwoven performance practice, by deliberately introducing new difficulties and struggles. This works well within a performance
art context. It is worth noting however that my musical collaborations with instrumentalists, including collaborations described in
the following section, are far less troubled in terms of the nature
of collaboration. When the collaboration is on the shared basis of
sound, technology has less of a bridging role, and therefore has
less of an overbearing influence on audience reception of a piece.
However, none of this is to say that our technology should in any
way become invisible or seamless.
3 Percussion - generation at the speed of
gesture, and freedom from the grid
A primary motivation for the development of Tidal over the years
has been collaboration with percussionists. This began with a
number of sessions and performances with drummer and digital
artist Alex Garacotche in 2004, including the Ultrasound festival
in Huddersield. At the time I was using the feedback.pl editor for
live coding with the Perl programming language, which included
an interesting user interface application for self-modifying code.
However, it was unwieldy, and when live coding “from scratch”,
I might be a minute into a performance before I started making
sound.
By switching to the Haskell programming language I have
been able to develop Tidal as an embedded Domain Specific Language (eDSL) for composing patterns as higher-order structures
with highly economical syntax (McLean 2014). This allows me to
respond to changes introduced by co-performers within seconds.
As well as speed of reaction, it has also been important to develop
an expressive approach to time. While 16 step dance music is a
passion of mine, Tidal allows me to quickly express complex metric subdivisions, and layering time signatures on top of each other
to create shifting polyrhythms. Tidal represents time using rational numbers, and patterns as functions rather than sequences, in
a highly flexible manner.
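The idea of patterns as functions of time, rather than stored sequences, can be illustrated with a toy model. The sketch below is only a simplified picture of that idea, not Tidal's actual representation or API: a pattern is queried with a time span and returns the events falling inside it, and floating-point time stands in for the rational numbers Tidal uses.

#include <cstddef>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Toy model of "patterns as functions of time": a pattern is a function
// from a time span (measured in cycles) to the events whose onsets fall inside it.
struct Event { double onset; std::string value; };
using Pattern = std::function<std::vector<Event>(double begin, double end)>;

// A cycle subdivided into equal steps, one named event per step, repeating every cycle.
Pattern steps(const std::vector<std::string>& names) {
    return [names](double begin, double end) {
        std::vector<Event> out;
        double step = 1.0 / names.size();
        for (int cycle = static_cast<int>(begin); cycle < end; ++cycle)
            for (std::size_t i = 0; i < names.size(); ++i) {
                double t = cycle + i * step;
                if (t >= begin && t < end) out.push_back({t, names[i]});
            }
        return out;
    };
}

// Layering two patterns: querying the stack queries both, giving a simple polyrhythm.
Pattern stack(Pattern a, Pattern b) {
    return [a, b](double begin, double end) {
        std::vector<Event> out = a(begin, end);
        std::vector<Event> more = b(begin, end);
        out.insert(out.end(), more.begin(), more.end());
        return out;
    };
}

int main() {
    // Three-against-four layering queried over a single cycle.
    Pattern p = stack(steps({"kick", "kick", "kick"}),
                      steps({"hat", "hat", "hat", "hat"}));
    for (const Event& e : p(0.0, 1.0))
        std::printf("%.3f %s\n", e.onset, e.value.c_str());
    return 0;
}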
Tidal is certainly not without its constraints, but the freedom
which this representation of time offers me has allowed me to
collaborate within free improvisation. My primary exploration in
this area has been with drummer Paul Hession, who has honed
his practice over decades, including through collaborations and
more recently solo play. Paul has now extended his drum kit with
a range of analogue, digital and physical techniques, and interestingly has explored collaborations with unsupervised ‘live algorithms’ alongside his occasional work with me as live coder (see
Fig. 3, and Hession and McLean 2014). On reflection, these performances have centred on a struggle with continual change.
My conclusion here is that while code necessarily distances the
live coding musician from the physical production of sound, live
coding technology, including my own, has succeeded in reducing the latency between action and reaction to close to the speed of gesture. This in turn has allowed me, as a live coder, to collaborate closely with live instrumentalists, including in free jazz
situations. In this sense, live coding has genuinely brought programmers closer to the people around them.
4 Community growth and genre
Collaboration in music extends beyond co-performers to audience members, and in the broad Musicking sense (Small 1998), where every activity around music culture is seen to be part
of music-making. There is an argument that music has become
formulaic and backward-looking over the past decade, lacking
revolutions comparable to rock ‘n roll and rave in the late 20th
century (Fisher 2014). It is too early to say whether live coding
will have real cultural resonance as an agent for change in mass
media, but perhaps there is some potential shown in the media
reaction to Algorave music (e.g. Cheshire 2013).
Algorithmic music has been present in dance music culture for
some time, but Algorave has provided a new common ground for
us to explore together (Collins and McLean 2014). Algorave is a
collaboration without clearly deined edges, a space initially created by live coders such as Nick Collins, Dan Stowell, Matthew
Yee-King and myself, and (I think crucially, in terms of establishing identity) graphic designer David Palmer. Creating this space
has in some respects been janitorial, helping shape that identity
in the background, while leaving space for organisers, performers
and (perhaps most importantly) revellers to define what Algorave
really means. What started as a joke of sorts has become unexpectedly successful - many people across the world (e.g. UK, Mexico, Australia, Germany, Peru, Belgium, Canada) have felt able to
make Algoraves of their own, without asking anyone for permission. Some have been organised by practitioners and professional
promoters, and quite a few within academic conferences, making
an ad-hoc collaboration which spans research and practice.
5 Closer
I would argue that live coding is now proven as a reasonable means
to make music, both within small engaged live coding communities, and within larger enthusiastic, dancing audiences in the hundreds. Perhaps the next leap is to see how live coding can bring
us closer together, and unearth modes of interaction that could
take us further away from software engineering, towards closer
shared experience of code. From the perspective of music technology, the most recent leaps in shared programming environments are a decade old: the Republic live coding environment for
SuperCollider (Rohrhuber et al. 2007), and the Reactable tabletop
instrument (Jordà et al. 2005). The former explores conversational,
shared live coding style, and the latter simultaneous editing of a
sonic dataflow network by collaborators around a circular table.
My feeling is that a further leap is overdue, and the results could
take live coding further away from the well established applications
for programming languages, into radically different ones. In particular, environments aimed at creative, shared exploration through
abstraction, and at shared experience rather than end results.
References
Cheshire, Tom. 2013. “Hacking Meets Clubbing with the ’Algorave’.”
Wired Magazine (UK) (September): 85+.
Collins, Nick, and Alex McLean. 2014. “Algorave: a Survey of the
History, Aesthetics and Technology of Live Performance of Algorithmic
Electronic Dance Music.” In Proceedings of the International Conference
on New Interfaces for Musical Expression.
Collins, Nick, Alex McLean, Julian Rohrhuber, and Adrian Ward.
2003. “Live Coding in Laptop Performance.” Organised Sound 8 (03):
321–330.
Fisher, Mark. 2014. Ghosts of My Life: Writings on Depression, Hauntology
and Lost Futures. Paperback; Zero Books.
Hession, Paul, and Alex McLean. 2014. “Extending Instruments with
Live Algorithms in a Percussion / Code Duo.” In Proceedings of the 50th
Anniversary Convention of the AISB: Live Algorithms.
Jordà, Sergi, Martin Kaltenbrunner, Günter Geiger, and Ross Bencina.
2005. “The reacTable.” In Proceedings of the International Computer
Music Conference (ICMC) 2005, 579–582.
McLean, Alex. 2014. “Making Programming Languages to Dance to:
Live Coding with Tidal.” In Proceedings of the 2nd ACM SIGPLAN
International Workshop on Functional Art, Music, Modelling and Design.
McLean, Alex, and Kate Sicchio. 2014. “Sound Choreography <>
Body Code.” In Proceedings of the 2nd Conference on Computation,
Communication, Aesthetics and X (xCoAx), 355–362.
Rohrhuber, Julian, Alberto de Campo, Renate Wieser, Jan-Kees van
Kampen, Echo Ho, and Hannes Hölzl. 2007. “Purloined Letters and
Distributed Persons.” In Music in the Global Village Conference 2007.
Small, Christopher. 1998. Musicking: the Meanings of Performing and
Listening (Music Culture). First edition. Paperback; Wesleyan.
Ward, Adrian, Julian Rohrhuber, Fredrik Olofsson, Alex McLean,
Dave Griffiths, Nick Collins, and Amy Alexander. 2004. “Live
Algorithm Programming and a Temporary Organisation for Its
Promotion.” In Read_Me — Software Art and Cultures, edited by Olga
Goriunova and Alexei Shulgin.
Digital Symbiosis:
The Aesthetics
and Creation of
Stimulus-Reactive Jewellery
with Smart Materials
and Microelectronics
Katharina Vones
University of Dundee, Dundee, UK
[email protected]
Keywords: Smart Materials, CAD, Posthuman, Microelectronics,
Jewellery, Thermochromic, Arduino
This article explores how smart materials, and in particular
thermochromic silicone, can be integrated into a wearable object in
combination with microelectronics to create aesthetically coherent stimulus-reactive jewellery. The different types and properties of thermochromics are discussed, including experiments with
layering pigments that react at different temperatures within
three dimensional silicone shapes. The concept of creating digital
enchantment through playful interaction is introduced, illustrating how accessible microelectronics can be used to facilitate the
creation of responsive jewellery objects. Bringing together digital
methods of fabrication with craft methodologies to create objects
that respond intimately to changes in the body of the wearer and
the environment is presented as an outcome of this research project. Moving towards the notion of a posthuman body, potential
practical applications for these jewellery objects exist in the areas
of human–computer interaction, transplant technology, identity
management and artificial body modification, where such symbiotic jewellery organisms could be used to develop visually
engaging, multifunctional enhancements.
1 Introduction
The idea of creating a jewellery organism that comes alive on the
body has fascinated and inspired my research ever since learning about the potential of smart materials to generate vitality in
static objects almost twelve years ago (Saburi 1998). While smart
materials have been known to scientists for far longer, and have
been used to great effect in engineering and aeronautic applications as actuators, their use in contemporary art and craft has
been sporadic, most likely because of the challenges posed in
processing and shaping them. With the increased prevalence of
digital technologies in our everyday lives, the questions posed to
the contemporary craft practitioner regarding the creation of a
more reined interaction between the digitally enhanced object
and its wearer have become progressively more prominent in
the applied arts (Wallace 2007). Through examining the notion
that human biology is a part of material culture, where the body
can be shaped, customised or altered through surgical intervention and scientific innovation, my research explores how recent
developments in material science and wearable technologies can
be viewed as moving towards a future embracing the posthuman
body, bridging the gap between craft practitioner and scientific discovery (Hayles 1999). Developing a holistic approach, whereby
material experimentation and digital production processes are
used to facilitate the development of aesthetically and biologically integrated wearable technologies, is the goal my research
moves towards. More immediately, however, I am challenging the
perception of smart materials and their application within the
field of contemporary jewellery, in both an artistic and scientific context, through proposing the development of symbiotic stimulus-reactive jewellery organisms.
Taking David Rose’s concept of the enchanted object (Rose
2014) and playful interactions as a starting point, my research
addresses aesthetic considerations alongside functionality, thus
developing material and technological solutions that constitute
an integrated and functional yet unified part of the jewellery
object as a whole. While previous projects have placed a strong
emphasis on simply creating receptacles to accommodate electronic components within a wearable object, the possibilities
offered by digital manufacturing technologies such as rapid-prototyping and computer aided design (CAD) have expanded the
aesthetic vocabulary available to the practitioner. Furthermore,
the development and increasing availability of a range of stimulus-reactive smart materials, in addition to the progressive miniaturisation of electromechanical components, has turned the prospect of developing jewellery objects that appear to be responsive
to their environment, yet depend closely on an interaction with
the physiology of the wearer’s body to stimulate these responses,
from a distant imagining into a feasible goal.
2 Exploring the Future – Smart Materials
I initially became aware of a group of smart materials known as
Thermochromics through a presentation given by Dr. Sara Robertson at the CIMTEC 2012 conference in Montecatini Terme, Italy,
exploring the potential of temperature-sensitive thermochromic
dyes and heat-profiling circuits in textile design (Robertson 2011). Intrigued by their ability as a smart material to respond directly
to a change in body temperature through colour change, I began
to explore their potential in combination with the three dimensional silicone shapes I had been developing. Thermochromics
are commonly available as either dye slurry or in powdered pigment form, and fall into the two main categories of leuco or liquid
crystal thermochromics. Either variety is available in a range of
colours and with different temperature change points, displaying a visible colour change with an increase or decrease in exposure temperature. Leuco dyes change from pigmented to colourless when a heat or cold source is applied, depending on their
change temperature, and assume pigmentation again as soon as
the source of temperature change is removed. Analogue Liquid
Crystal dyes cycle through a set of colours that correspond to
the temperature they are exposed to, with the most recognisable
form being the ‘peacock’ colour palette ranging from red through
yellow, green and deepening shades of blue. After a certain peak
temperature is reached towards the dark blue spectrum, usually
about 20 degrees above activation temperature, visibility of the
pigment ceases and only returns in the cooling phase when it
cycles through the previous colour shifts in reverse until it once
more falls below its activation temperature. Digital Liquid Crystal technology, in which the pigment appears to be either in an
‘on’ or an ‘off’ state according to the temperature it is exposed to,
has also recently become available. The colour change reactions
of thermochromic dye systems are available as reversible and
irreversible types. However, as one of the definitive conditions of
smart materials is full reversibility, only the former type can be
categorised as such and is of interest to me in this respect.
There are a variety of practical and industrial applications for
thermochromic pigments, dyes and paints. One of the most well
known is the inclusion of liquid crystal technology in forehead
thermometers, where each degree of measured body temperature is assigned a corresponding colour. Similarly, Leuco dyes are
widely used in fuel assemblies, to test combustion engines and as
Figs. 1 & 2 Example of a Single
Pigment Test: Blue 27˚C in its
unchanged and changed state
Figs. 3 & 4 Example of a Dual
Pigment Test: Blue 27˚C and Yellow
38˚C in its unchanged and changed
state
Figs. 5, 6 & 7 Example of the
progressive stages of change in a
dual pigment sample of Magenta 41˚C
and Yellow 38˚C
friction markers in engineering, effecting an irreversible colour
change when heated and thus signalling a state change of the
monitored component (Robertson 2011). My research currently
focuses on exploring the potential of layering leuco and liquid
crystal pigments in silicone to explore the interplay of colours
created by different colour and temperature combinations. I have
adopted a rigorous testing protocol for these experiments, starting with four base pigments in different colours and each with a
different change temperature (Blue 27˚C, Yellow 38˚C, Magenta
41˚C, and Red 47˚C). Each batch of samples is made using the same
process, requiring 16.5g of mixed silicone for a full set of 25 with
one extra shape as a spare. An initial set of shapes of each single
colour was prepared, starting with 0.1 ml of pigment and adding
0.05 ml of pigment for every five shapes (Figs. 1. & 2.).
Next, two pigments were combined in a single mix, starting
with 0.1ml of each colour (a total of 0.2 ml) and adding 0.05 ml of
each colour (a total of 0.1 ml) for every five shapes. The resulting
colours were then evaluated for hue, transparency and strength
of pigmentation in both their changed and unchanged states. In
their unchanged state, pigmentation strength is greatest in the
final segment of each colour, with saturation levels nearing opacity, and weakest in the first segment, creating a translucent finish. Translucence yields to opacity at around 0.3 ml of added pigment. This result was predicted and corresponds to expectations
formed from my past research in combining artists’ pigment with
silicone. The resulting colours of the combination samples follow
the general rules for colour mixing as demonstrated on a colour
wheel, and the resulting hues range from slightly disappointing to very pleasing, although this is arguably a matter of taste
and artistic intent. With the application of heat, the samples go
through a variety of colour changes. In their first changed state,
the lower temperature colour fades and reveals the underlying
higher temperature pigment. The samples appear as a lighter version of their unchanged colour at this stage, with some combinations such as blue and yellow displaying a very distinctive change,
others such as magenta and yellow displaying a more subtle outcome (Figs. 3. & 4.). If heated again, the second pigment fades and
reveals a milky base colour with the dominant pigment in evidence as a pastel shade (Figs. 5-7.). It is possible to further modify
the colour response by introducing a permanent base shade consisting of artist or special effects pigments to the mixture, and I
am currently conducting tests to exploit the aesthetic possibilities inherent in this suggestion. Two pieces in which this idea is
explored are the Xylaria Brooch (Fig. 8.) and the Cocoon Necklace
Fig. 8 Xylaria Brooch in its changed
state, Katharina Vones (2013)
Fig. 9 Cocoon Necklace in its changed
state, Katharina Vones (2013)
(Fig. 9.). Both feature thermochromic silicone shapes which react
to environmental temperature changes but also contain a stable
base pigment which becomes visible once the thermochromic pigment fades. Thus the Xylaria Brooch changes from raspberry pink
to bright orange, whereas the Cocoon Necklace contains shapes
that appear violet and then fade to light blue. The latter also has
black 3D-printed components that have been treated with liquid
crystal technology and change through a peacock spectrum of
hues of green and blue from about 27˚C.
3 Digital Enchantment
While the exploration and use of smart materials constitutes
one area of my research, another equally important aspect is the
creation of an elusive characteristic defined by the term digital
enchantment (Rose 2014). Within the context of wearable futures,
this could best be described as the sensation of wonder and surprise created by an unexpected, captivating and apparently
spontaneous reaction between the object, its user, discreetly
embedded technology and its environment. It stands in direct
opposition to recent developments to commercialise the wearable futures market by focusing on miniaturising and adapting
already existing technologies to be worn on the body. Examples
of this include a number of smart watches such as the Samsung
Gear and the Apple Watch, as well as the much talked about Google
Glass. However, these devices have so far failed to capture the
imagination of users, with the Samsung Gear reportedly suffering
from poor sales (Amadeo, 2013) and Google Glass having recently
been removed from the consumer market altogether in order to
be developed solely for institutional and business use (Hedgecock
2015). Whilst sporting a multitude of arguably useful functions
such as cameras and internet access, these wearable devices are
very much rooted in the semiotics of traditional gadget culture,
introduced through popular culture icons such as James Bond and
Dick Tracy as early as the 1930s (Johnson 2011). Instead of discovering new ways to engage the wearer through playful interaction, this recent incarnation of wearable devices has maintained
an aesthetic and modes of usage firmly rooted within established
parameters by simply imbuing familiar types of body adornment
with novel technological content. My research addresses these
issues through exploring the ways in which an object worn on the
body is imbued with digital enchantment through encouraging
playful interaction with changes in the environment and biological impulses of the wearer.
3.1 Arduino – Accessible Electronics
The Arduino system of microelectronic components offers an
accessible starting point for those less experienced at assembling
electronic components and programming (Margolis 2011). As the
boundaries between digital art, craft and technology become more
blurred, the need for craft practitioners to become fully versed
in the vernacular of the digital becomes more pressing. Embedding electronics within wearable objects poses its own set of challenges, in particular those of miniaturisation and power supply.
While the latter is at the present time dependent upon technological developments that would exceed the scope of my research
project, the former is an issue that successive generations of ever
smaller components, such as the recent Adafruit Gemma, Flora
and Trinket microcontrollers, have begun to address (Fried 2014).
In order to imbue the wearable objects I am creating with a sense
of being ‘alive’, I initially started experimenting with a variety of
LED components that respond in some way to their environment.
Fig. 10 Arduino Uno RGB LED Colour
Organ, Katharina Vones (2012)
Fig. 11 Geotronic Brooch, Katharina
Vones (2013)
Fig. 12 Hyperhive Pendant (2 of 3),
Katharina Vones (2014)
The first such circuit I created is a light-sensitive colour organ
(Fig.10.). Using an Arduino Uno microcontroller board, three RGB
LEDs and three miniature photocells, the light-sensitive colour
organ responds to changes in light levels at each of its three photocells by sending a corresponding colour value to the RGB LEDs
and changing the colour accordingly. By sensing different light
levels and expressing them through changing colours, the jewellery object reacts to environmental circumstances as a photosynthetic organism might. After testing on a breadboard, the circuit
is then recreated with an Arduino Pro Mini microcontroller board
to miniaturise the assembly for integration into a wearable jewellery object. As a development of the idea of creating an interactive synergy between wearer and object through the use of light,
the Geotronic Brooch (Fig.11.) incorporates a programmable RGB
LED that simulates the rhythm of a beating heart through its pulsations. Further advances towards creating synergetic jewellery
objects are evident in the Hyperhive series of stimulus-reactive
pendants (Fig.12.). Sensors that measure the heart rate, proximity
and touch of the wearer are integrated into 3D printed pendants
and react to intimate contact by changing colour, lighting up or
generating movement. In combination with thermochromic silicone, this could generate a very dynamic and playful interaction
between the object, its wearer and the environment.
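Purely to illustrate the kind of logic behind the light-sensitive colour organ described above — this is a minimal sketch, not the actual firmware of the piece — an Arduino program along the following lines could map each photocell reading onto one channel of the RGB LEDs. The pin assignments and scaling are assumptions for the sake of the example.

```cpp
// Hypothetical Arduino sketch for a light-sensitive colour organ:
// three photocells drive the red, green and blue channels of an RGB LED.
// Pin numbers and scaling are illustrative assumptions, not the original circuit.

const int photocellPins[3] = {A0, A1, A2};  // analogue inputs from the photocells
const int ledPins[3]       = {9, 10, 11};   // PWM outputs to the RGB LED channels

void setup() {
  for (int i = 0; i < 3; i++) {
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  for (int i = 0; i < 3; i++) {
    int light = analogRead(photocellPins[i]);   // 0-1023 reading from one photocell
    int level = map(light, 0, 1023, 0, 255);    // rescale to the PWM range
    analogWrite(ledPins[i], level);             // brighter light, stronger colour channel
  }
  delay(20);  // modest refresh rate to smooth flicker
}
```

The same logic would carry over unchanged to the smaller Arduino Pro Mini used for the wearable version, since both boards share the same programming model.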
3.2 Thermochromic Silicone and the Wearable
Object
To fully exploit the colour responsiveness of thermochromic silicone without having to rely on a spontaneous reaction to changes
in environmental temperature, it is necessary to incorporate a
heat generating circuit into the wearable jewellery object which
in turn is activated by a sensor/microcontroller assembly. While
the use of heat sinks cut from thin copper foil or woven from conductive thread has been well established in the works of digital
textiles artists Maggie Orth (Orth 2007), Sara Robertson (Robertson 2011) and Lynsey Calder (Calder 2014), these approaches
are less suitable for use within thermochromic silicone, primarily because of its low Shore hardness and inherent high flexibility, making the integration of such circuits at the manufacturing
stage precarious. An additional complication arises from incorporating effectively uninsulated conductive materials into a jewellery object made from precious metals such as silver or gold,
which are highly conductive in themselves and could cause short
circuits if accidental contact between the heating element and
components of the object was established. As a viable alternative,
a ceramic Peltier element can be used. Based on the principle of
the Peltier Effect of heat displacement through electric current,
Peltier elements rapidly heat on one side while equally rapidly
cooling on the reverse. This makes them very suitable for use in
wearable technologies, where a current driven, predictable and
directional heat source is often desirable, particularly where the
element might come into contact with the wearer. While copper
heat sinks can radiate heat on both sides of the circuit and thus
need to be fully embedded to protect the wearer, the cool side
of the Peltier element remains safe to handle, while generating
enough heat to trigger the thermochromic reaction on the reverse.
Temperature can be controlled by current supplied to the element,
making it possible to effect subtle colour changes in the silicone
shapes. One slight disadvantage is the relatively slow cycle of the
Peltier element once current is removed, making rapid successive
colour changes impossible.
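As a rough sketch of the control principle outlined above — not the actual circuit of any of the pieces — a microcontroller could gate the current through the Peltier element with PWM so that the warm side just reaches the activation temperature of the pigment. The pin numbers, the sensor conversion and the 27 °C target are all assumptions made for illustration.

```cpp
// Illustrative sketch: modulate the current through a Peltier element so its warm
// side reaches the thermochromic activation temperature and no more.
// Pin numbers, sensor scaling and the target temperature are assumptions.

const int peltierPin = 3;      // PWM output to a MOSFET driving the Peltier element
const int tempPin    = A0;     // analogue temperature sensor (e.g. a thermistor divider)
const float targetC  = 27.0;   // approximate activation temperature of the pigment

float readCelsius() {
  // Placeholder conversion; a real circuit needs its own calibration.
  int raw = analogRead(tempPin);
  return raw * (100.0 / 1023.0);
}

void setup() {
  pinMode(peltierPin, OUTPUT);
}

void loop() {
  float t = readCelsius();
  if (t < targetC) {
    analogWrite(peltierPin, 180);   // supply current: warm side heats, cool side stays safe to touch
  } else {
    analogWrite(peltierPin, 0);     // remove current; the element cools back slowly
  }
  delay(500);
}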
4 Conclusion – Towards a Posthuman Future
Jewellery and the concept of adorning the body have a rich and
well-documented history of being imbued with meaning that
stretches beyond notions of wealth, value, social status, aesthetics and consumerist desire into the realms of emotional and
conceptual significance (Skinner 2013). Digital jewellery practitioners such as Sarah Kettley (Kettley, 2007) and Jayne Wallace
(Wallace, 2007) through their body of work have explored ways in
which technological developments can be used in a jewellery context to forge and enhance emotional connections through stimulating a meaningful interaction between the jewellery object and
its wearer. Other practitioners such as Norman Cherry (Cherry,
2006) have gone further by suggesting that eventually the boundaries between ornament and body will become indistinguishably
blurred through extreme modifications and implantable jewellery,
a development that radical jeweller Peter Skubic had already foreshadowed in 1975 with his performance Jewellery Under the Skin
(den Besten, 2013). The development of the ‘Carnal Art’ manifesto
by French artist Orlan as part of her project The Reincarnation of
Saint-Orlan from 1990 onwards, in which the artist’s body serves as
the site of repeated surgical interventions and modifications, can
be seen as a logical trajectory of this line of enquiry, albeit sited
within the discourse of feminist performance art (Hirschhorn,
1996). Against this backdrop of ongoing exploration, the development and expansion of the concept of the Posthuman body to question the role technology and body modification could play in shaping the physical realities of the future, both on a functional and
aesthetic level, has gained increasing momentum (Hayles, 1999).
Having developed a range of stimulus-responsive jewellery
objects using smart materials and microelectronics, the question
remains how these wearable futures could be integrated even
more comprehensively into the body of the wearer. While these objects are
at present still recognisably autonomous, current advances in transplant
technology and the ability to use human cells as a material in 3D
printing offer tantalising glimpses of a future where the body
could become host to near-organic, possibly artificially intelligent
jewellery organisms. Moving towards a future in which technology could become permanently integrated into the complex systems of the Posthuman body, I am intrigued by the possibilities
and challenges facing the contemporary jeweller in advancing the
debate surrounding the Posthuman and interactive adornment.
Potential practical applications for this line of investigation
exist in the areas of human–computer interaction, transplant
technology, medically assistive objects, identity management
and artificial body modification, including prosthetics, where such
symbiotic jewellery organisms could be used to develop visually
engaging yet multifunctional enhancements of the body. The
intersection between technological refinement, the exploration of
smart materials and new manufacturing technologies as well as
the development of an aesthetic expression that supersedes ideas
of mere gadgetry is a challenge in this area of research and one
which I am in the process of addressing with my contribution to
the field.
References
Amadeo, Ron. “DOA: The Galaxy Gear Reportedly Has a 30 Percent Return
Rate at Best Buy.” Ars Technica, 26 October 2013.
Calder, Lynsey. https://codedchromics.wordpress.com/.
Cherry, Norman. “Grow Your Own – Angiogenetic Body Adornment.”
SCAN Journal of Media Arts and Culture 3, no. 3 (2006).
den Besten, Liesbeth. “Europe.”. In Contemporary Jewelry in Perspective,
edited by Damian Skinner, 99-114. New York: Lark for Art Jewelry
Forum, 2013.
Fried, Limor. http://www.adafruit.com/.
Hayles, N. Katherine. How We Became Posthuman - Virtual Bodies
in Cybernetics, Literature and Informatics. Chicago & London: The
University of Chicago Press, 1999.
Hedgecock, Sarah. “Google Glass Startups Claim: Not Dead Yet.” Forbes,
23 January 2015.
Hirschhorn, Michelle. “Orlan: Artist in the Post-Human Age
of Mechanical Reincarnation: Body as Ready (to Be Re-) Made.”
Generations & Geographies in the Visual Arts: Feminist Readings (1996):
110.
Huang, W. M., Z. Ding, C. C. Wang, J. Wei, Y. Zhao, and H. Purnawali.
“Shape Memory Materials.” Materials Today 13, no. 7–8 (2010): 54-61.
Johnson, Brian David. Science Fiction for Prototyping: Designing the
Future with Science Fiction. Synthesis Lectures on Computer Science. San
Francisco: Morgan & Claypool Publishers, 2011.
Kettley, Sarah. “Crafting the Wearable Computer: Design Process and
User Experience.” Napier University, 2007.
Margolis, Michael. Arduino Cookbook. 2nd ed. Cambridge: O’Reilly Media,
2011.
Orth, Maggie. http://www.maggieorth.com/index.html.
Robertson, Sara. “An Investigation of the Design Potential of
Thermochromic Textiles Used with Electronic Heat-Profiling Circuitry.”
Heriot-Watt University, 2011.
Rose, David. Enchanted Objects: Design, Human Desire, and the Internet of
Things. New York: Scribner Book Company, 2014.
Saburi, T. “Ti-Ni Shape Memory Alloys.” In Shape Memory Materials,
edited by K Otsuka and C M Wayman, 49-97. Cambridge: Cambridge
University Press, 1998.
Skinner, Damian. “The History of Contemporary Jewelry.” In
Contemporary Jewelry in Perspective, edited by Damian Skinner. New
York: Lark for Art Jewelry Forum, 2013.
Wallace, Jayne. “Emotionally Charged: A Practice-Centred Enquiry
of Digital Jewellery and Personal Emotional Significance.” Sheffield
Hallam University, 2007.
Sonically Tangible Objects
Hanna Schraffenberger
Media Technology Research Group, LIACS, Leiden University,
Leiden, The Netherlands
[email protected]
Edwin van der Heide
Media Technology Research Group, LIACS, Leiden University,
Leiden, The Netherlands
Studio Edwin van der Heide, Rotterdam, The Netherlands
[email protected]
Keywords: Presence, Tangibility, Augmented Reality, Virtuality,
Perception, Interaction, Haptics, Sound, Touch, Binaural Audio,
Tactile Stimuli, Gesture Control, Haptic Interfaces, Sound
Interfaces
A unique power of virtual objects is that they do not have to
look, feel or behave like real objects. With this in mind, we have
developed a virtual cube that is part of our real, physical environment but, unlike real objects, is invisible and non-tactile.
‘Touching’ this virtual object triggers binaural sounds that appear
to originate from the exact spot where it is touched. Our initial
experimentation suggests that this sound-based approach can
convey the presence of virtual objects in real space and result in
almost-tactile experiences. In this paper, we discuss the concept
behind, implementation of and our experience with the sonically
tangible cube and place our research in the context of tangible
interaction, perception and augmented reality.
1 Introduction
With the advent of augmented reality (AR), the virtual has become
part of our environment in a profoundly new way. Virtual objects
are no longer confined to virtual spaces, digital devices and displays. Rather, virtual objects can appear in and inhabit our real,
physical space and act as if they were actually present in our otherwise real environment.
Much research in augmented reality focuses on making virtual
objects as real as possible. Researchers and developers strive for
photorealism and aim for scenarios where virtual objects cause
the same occlusions and shadows as physical objects (see, e.g.,
Gibson and Chalmers 2003). Similarly, scientists include physics
simulations to make virtual objects adhere to physical laws and
move like real objects (e.g., Kim, Kim, and Lee 2011). In line with
this, there is a focus on tangible interfaces and techniques that
allow users to interact with virtual content in the same way as
they would with real physical objects (e.g. Billinghurst, Kato, and
Poupyrev 2008; Buchmann et al. 2004).
Our research follows another direction. Instead of imitating
reality, facilitating physical interaction or simulating real-world
properties, we want to create new experiences that have no equivalent in a purely physical world. We are interested in how augmented reality scenarios can differ from strictly physical, ‘unaugmented’ environments.
In this project, we explore a new, non-visual way of conveying the presence of virtual objects in real space. Presence is often
associated with the experience of ‘being present in a virtual environment’. However, we believe that another form of presence,
namely in the sense of ‘something virtual being present in our
real environment’, is key to augmented reality experiences. With
this project, we explore whether the presence of virtual objects
can be experienced through a combination of touch gestures and
spatial sound.
The project presented in this paper is motivated by two underlying considerations. Firstly, virtual objects do not have to look
or behave like real objects in order to be a believable part of our
real, physical space (cf. Schraffenberger and Heide 2013a). Secondly, virtual objects could potentially be perceived differently
from how real objects are perceived.
Inspired by this, we have developed a new kind of virtual
object – the so-called sonically tangible cube. Unlike real objects,
this cube is invisible and it does not provide tactile feedback.
However, ‘touching’ the virtual cube triggers binaural sounds
that appear to originate from the exact spot where it is touched.
Our initial experiments show that through this sonic feedback,
virtual objects can gain an almost-tactile quality and appear as if
they were actually present in real space. It is this idea of making
virtual objects both tangible and present through spatial sonic
feedback that distinguishes “sonically tangible objects”.
Several questions have fuelled the development of the virtual
cube and our research into sonically tangible objects. First and
foremost we were wondering if it is possible to leave out the tactile component in tangible interaction. If there is no tactile stimulation, would the virtual object still be perceived as part of real
space – and if so, would it be experienced as an object with a tactile, physical component? We were intrigued by how it could feel
to touch an object that provides no tactile sensations. Furthermore, we were eager to learn more about how virtual objects can
differ from real objects.
While we provide preliminary answers to these questions, the
focus of this paper is on the underlying concept of sonically tangible objects. (So far, inferences regarding the perceptual qualities
of the cube are based on informal testing and on our subjective
experience with the cube).
The central idea – that the cube is tangible but not tactile – calls
for a distinction between the terms tangible and tactile. In this
paper, things are called tangible if they can be perceived by
touching or being in contact with them. Only objects that also
stimulate the tactile receptors (as found in the skin and tissue) are
referred to as tactile. This understanding leaves room for objects
that are tangible but not tactile.
The paper consists of 4 sections. In the following section (2), we
share choices made and insights gained during the development
of the project, describe the setup and implementation of the sonically tangible cube and discuss our experience with it. Following this (3), we compare the project with related work and place
it in the context of pertinent research fields, such as perception
research, augmented reality and tangible interaction. The paper
ends (4) with a reflection on the project and possible directions for
future research.
2 The Sonically Tangible Cube
The sonically tangible cube is a virtual object. It is unlike any real
object in the sense that it is non-tactile, invisible and lacks physical properties, such as weight and temperature. It does, however, have sonic and spatial properties such as a shape, texture
and loudness. Although the cube has no tactile component, its
presence can be perceived through touch. When fingers enter the
cube, sound appears to originate from where the virtual object
is touched. The resulting sonic feedback not only corresponds to
the fingers’ positions but also fits the movement of the fingers.
Fast finger movements result in more agitated soundscapes while
slower movement causes less dense, more distinct feedback. As
the cube is non-solid, fingers can move through it and explore its
inner texture.
2.1 Implementation and Setup
Fig. 1 A colleague explores the
virtual, invisible and non-tactile
cube. A Leap Motion Controller is
used to track the position of his
fingertips.
The virtual cube is 20 cm × 20 cm × 20 cm in size and it floats 10
cm above the work desk of one of the authors. The technical setup
consists of a Leap Motion Controller (www.leapmotion.com),
which detects the position of the participant’s fingertips in real
space. It is placed on the desk and senses hand movement above
the device (see Figure 1). A custom Max (2014) patch, which runs
on an Apple Mac mini, interprets the data provided by the Leap
Motion. Interfacing with the Leap Motion device is realized with a
Max external object ‘aka.leapmotion’ by Akamatsu (2014). In our
current setup, the frame rate of the Leap Motion device is around
57 fps when the office is naturally lighted and slightly above 200
fps when the amount of interfering infrared light is reduced by
darkening the room. The Max patch evaluates whether and where
the participant is touching the cube on the basis of the fingers’
coordinates. If the fingers are located within the 20 cm × 20 cm
× 20 cm area that has been defined as the cube, their movement
triggers pre-recorded binaural sounds. This interpretation of the
finger position works for every finger independently and allows
the participant to explore the cube with up to ten fingers at a time.
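The evaluation itself is carried out in the Max patch; purely as an illustration of the mapping it performs, the C-style fragment below shows how a fingertip coordinate could be tested against the 20 cm cube and converted into one of the 64 five-centimetre sub-cube indices described later. The coordinate origin, units and function name are assumptions, not the authors' implementation.

```cpp
// Illustrative mapping (not the actual Max patch): convert a fingertip position
// in centimetres to one of the 64 five-centimetre sub-cubes, or -1 if the finger
// is outside the virtual cube. The cube is assumed to start at (x0, y0, z0) and
// extend 20 cm along each axis.

int subCubeIndex(float x, float y, float z,
                 float x0, float y0, float z0) {
  float dx = x - x0, dy = y - y0, dz = z - z0;
  if (dx < 0 || dy < 0 || dz < 0 || dx >= 20 || dy >= 20 || dz >= 20) {
    return -1;                       // finger is outside the cube: no sound
  }
  int ix = (int)(dx / 5.0f);         // 0..3 along each axis
  int iy = (int)(dy / 5.0f);
  int iz = (int)(dz / 5.0f);
  return ix + 4 * iy + 16 * iz;      // 0..63, selects one binaural recording
}
```

Running this test once per tracked fingertip and per frame is enough to support the independent, ten-finger exploration described above.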
Constraints of the current setup are that the sound only
matches the fingers’ position if the participant is sitting at the
right spot and directly facing the cube. Also, due to the frame rate
of the Leap Motion device, very fast hand-movement can cause a
mismatch between the hand-position and the spatial information
of the triggered sound. Moreover, finger movement is sensed best
if the hands are held horizontally.
2.2 Development
The sonically tangible cube was developed in an iterative process
during the course of several months. In the course of the project, the authors acted as researchers, developers and participants.
Additionally, colleagues were asked to provide feedback and
describe their experience with the cube on occasion.
From the beginning, we have explored the idea of making virtual objects tangible and present through sonic feedback. The
topic of (in)visibility was left aside for future research and hence,
many evaluations have been conducted with closed eyes. Two
determining observations and decisions were made concurrently
in the early stages of the development process.
Shape
One of the two early decisions regards the shape of the object. We
have started out with several simple geometric shapes and figures such as lines and planes and cubes. Our initial experimentation indicated that it is very difficult to experience a plane or
a line. Running one’s hands freely through a three dimensional
object and exploring its borders and inner texture offered the
most intriguing, tactile-like experience and promised to convey
an object’s presence best. Hence we have decided to focus on a
cube-shaped virtual object.
Binaural Audio
The other decisive observation concerns the sonic aspect of the
project. In the beginning, simple synthesized clicks were played
back in mono (feeding the identical signal to both the left and the
right channel) through closed Beyerdynamics DT 770 Pro headphones whenever a virtual object was touched. This was done in
order to learn about the effects of linking movement in a certain
area to a basic sonic response. However, our initial trials showed
that the resulting experience was closer to being informed that
one’s hand had entered a predefined space rather than a direct
sensory experience of an object in space. This did not come as
a complete surprise. After all, interacting with real objects and
materials – crumbling paper, scratching on a surface, typing on
a keyboard or moving the mouse – causes sounds that originate
from the objects themselves and from the position where they are
touched rather than spatially uncoupled mono signals.
This is where the idea of using binaural audio in order to make
the sounds originate at the fingertips came into play. Binaural
audio is based on the notion that hearing makes use of two signals: the sound pressure at each eardrum (Møller 1992). If these
two signals are recorded in the ears of a listener, the complete
auditive experience – including the three dimensional spatial
information of the sounds – can be reproduced by playing the signals back at the ears.
In order to investigate the potential of binaural recordings,
we conducted some simple initial experiments. For example, we
recorded the sound of someone knocking on the closed office door
and the sound of the ringing phone while working in the office.
From these initial experiments it became clear that binaural audio
indeed can convey the desired experience. When listening back to
those recordings later on, the sounds seemed to originate from
those exact spots where they originally had happened. The virtual ringing of the phone was practically indistinguishable from a
real call. The simple knocking sound was powerful enough to create the illusion of ‘someone actually being behind the door’, and
hence proved capable of communicating the presence of something
or someone in real space. The use of binaural recordings has since
grown into a key aspect of the project.
The move to binaural audio went hand in hand with a switch to
open AKG K702 headphones. Due to the open nature of the headphones, the recorded sounds mix in with the sounds naturally
present in the environment. This additionally supports the experience that the virtual sonic object is inhabits our real physical
space rather than a virtual or separate space.
Recordings
What should the virtual sonic object sound like? The choice of
using binaural recordings introduced the question of what to
record. We were searching for sounds that (1) are abstract (do not
invoke the idea of a specific real object), (2) have a tactile quality
and (3) support the idea of a non-solid object/material that allows
the fingers to move through it. Several different sound sources
have been tested during the development: for example, foils,
paper, plastics, packaging materials from everyday objects, rattles and empty bottles. All sounds were produced by interacting
with the materials with the hands and fingers. This choice was
based on the assumption that sounds that actually are created by
hand/finger movement are more likely to fit the exploratory hand
gestures of the participant and more likely to create a tactile-like
experience. (In the same sense as the sound of squeaking nails on
a chalkboard can be an almost-tactile, physical experience, even
if someone else is scratching the board). For the current implementation of the sonically tangible cube, we have settled on the
sound of aluminum foil, produced by squashing a tiny plastic bag
filled with small crumbles of the foil.
To make the sounds appear as if they originate from the position where the cube is touched, a custom set of binaural recordings has been prepared. For this, we have divided the cube into
64 sub-cubes of 5 cm × 5 cm × 5 cm (see Figure 2). Five-second
samples of aluminum foil sounds were recorded at all 64 positions
within the cube. For this, we used a ZOOM H4 audio interface
and two DPA 4060 microphones. The microphones were placed
slightly above the ear-entrance of one of the authors and the
sound was recorded with a basic Max patch. For the recordings,
the author successively produced the desired sound by squashing
the little plastic bag and rubbing the aluminum crumbles against
each other at each of the 64 subareas. Aside from this, the author
was sitting motionlessly in front of the desk, facing the cube just
like participants do during the experience (see Figure 1).
Sound Design and Mapping
1 The changes in playback speed also
influence the spatial characteristics
of the sounds. However, as those
variations were minimal this effect
was negligible.
When a participant interacts with the cube, the positions of his/
her fingers determine which of the 64 recorded audio samples are
played back. If a finger is placed in a sub-cube, the corresponding recording is activated. However, first tests showed that simply
playing back the recordings resulted in a sound that only matched
the fingers’ positions, but not the different variations in hand and
finger movement (slow, fast, no movement, etc.). Hence, we have
experimented with more complex settings that map the movement of the fingers to parameters in the sound design.
Our current implementation offers two sound design settings.
Both react to each finger individually. The first setting makes use
of granular synthesis. Here, the change of a finger’s position triggers the playback of an audio grain that is taken from the binaural
recordings. Each grain is between 10 ms and 20 ms long and is
varied slightly in pitch/playback speed.1 Furthermore, a random
offset is used to vary the position in the binaural recording from
where the grain is taken. This causes every grain to sound different, which is crucial for the believability of the experience.
The second setting follows a similar underlying idea. Here, the
binaural recordings are layered and looped. A faster movement
activates more layers. Each active layer loops the five-second
recording, starting at a random position within the sample and
playing it back with a slight variation of speed/pitch.1
Both settings result in a louder, more complex and dense
soundscape if the finger moves fast and in a softer, less dense but
more distinct soundscape if the movement is slow. As this happens for each finger individually, the number of fingers used by
the participant has a similar effect: the more fingers are involved
in the exploration, the denser the sound. The two settings differ
with respect to the textural nature of the sound. Whereas the granular synthesis results in a more gritty and rough soundscape, the
layered loops produce a thinner, airier sound texture.
For either setting, movement is necessary to ‘excite’ the virtual
cube and to elicit sounds. No movement results in silence, even if
the hand is placed in the cube. However, as it is impossible to keep
one’s fingers completely still, the occasional slight tremble of the digits will cause corresponding sound output.
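Only as a sketch of the behaviour described above (the actual sound engine is a Max patch), the fragment below shows how per-finger movement could gate the grain triggering of the first setting: grains fire only when a finger is inside the cube and moving, each with a random offset into the corresponding binaural recording and a slight pitch variation. subCubeIndex() refers to the earlier illustrative fragment, and playGrain() is a hypothetical stand-in for the granular playback; the speed threshold and variation ranges are assumptions.

```cpp
// Illustrative per-finger grain triggering (not the authors' Max patch).
#include <cmath>
#include <cstdlib>

int  subCubeIndex(float x, float y, float z, float x0, float y0, float z0); // earlier sketch
void playGrain(int recording, float offset, float pitch, float lengthSec);  // hypothetical audio call

struct Finger { float x, y, z, px, py, pz; };  // current and previous position (cm)

void updateFinger(Finger &f, float x0, float y0, float z0) {
  int idx = subCubeIndex(f.x, f.y, f.z, x0, y0, z0);
  // Per-frame displacement stands in for movement speed.
  float speed = std::sqrt((f.x - f.px) * (f.x - f.px) +
                          (f.y - f.py) * (f.y - f.py) +
                          (f.z - f.pz) * (f.z - f.pz));
  if (idx >= 0 && speed > 0.01f) {                            // inside the cube and moving
    float offset = (float)std::rand() / RAND_MAX;             // random start point in the recording
    float pitch  = 0.95f + 0.1f * std::rand() / RAND_MAX;     // slight speed/pitch variation
    float length = 0.010f + 0.010f * std::rand() / RAND_MAX;  // 10-20 ms grain
    playGrain(idx, offset, pitch, length);
  }
  f.px = f.x; f.py = f.y; f.pz = f.z;                         // remember position for the next frame
}
```

Because faster movement produces more position changes per second, more grains fire, which is what yields the denser soundscapes described above without any explicit loudness mapping.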
2.3 Experiencing the Cube
Fig. 2 The sonically tangible cube
was divided into 64 sub-cubes.
A binaural recording was made at
all 64 positions. Image of the cube
contributed by Wim van Eck.
How does the cube feel – does the experience really differ from
simply moving one’s hands through thin air? Do we perceive
the cube as present in space, do we perceive it as tangible? It is
important to systematically investigate this by performing experiments with a group of unbiased participants in the future. In the
following, we discuss our own experience with the sonically tangible object and compare it to known experience.
On one level, the experience can be compared to moving one’s
hand through a beam of light. We can clearly see the beam’s presence in space, but we cannot feel it. Similarly, in the case of the
cube, there is no traditional tactile feedback but our ears still tell
us that something is there.
On another level, experiencing the cube better compares to
feeling out a physical object blindly with one’s hands. After all,
it is only through the physical act of touching that we can perceive the cube in the irst place. There is no notion of the object,
unless one is in contact with it. Also, like in typical haptic perception, the experience of the object takes time and happens through
exploratory gestures with one’s fingers. Furthermore, it is similar
to touching a real object in the sense that this, too, can cause
sounds at the corresponding position.
Yet, the experience is also inherently different from interacting with a physical object. One can, for example, not hold, move
and turn the object. Instead, it is possible to move right through
the cube and explore its inner texture and structure. Also, it
is impossible to simply follow the contour of a sonically tangible object and to explore its shape that way (cf. Lederman and
Klatzky 1987). Rather, the contour can be perceived by repeatedly crossing (zigzagging around) the border of the object and
moving in between the sonic space of the cube and the silent
space surrounding it.
Last but not least, interacting with the cube has similarities
with playing gesture controlled open-air instruments such as
the Theremin. (The Theremin is played by moving one’s hands in
the space between two antennas.) Also here, movement in space
results in sonic output that corresponds to the position of the
hands.
While it remains difficult to put the experience of the cube in
words, one thing seems clear: Touching the cube is different from
simply moving one’s hands through thin air. When we move our
hands through air, we feel nothing but empty space. The cube, in
contrast, inhabits the space. While empty space simply is experienced as empty, the cube is experienced as something that is
present and that can be touched. Although the experience is not
tactile in the traditional sense, it definitely has tactile-like aspects.
3 The Cube in Context
Our project is multi-disciplinary; it draws from and contributes
to various fields of research, such as augmented reality, tangible interaction and perception. In this section, we take a second
look at the cube and discuss the virtual sonic object in the light of
related research.
3.1 Tangibility and Presence
The cube deals with (in)tangibility, requires active bodily engagement and it explores the possibilities of a tangible experience
without tactile stimuli. As such, our research relates to the field
of tangible and embodied interaction. Furthermore, the cube is
concerned with the presence of virtual objects in real space, and
hence relates to the field of augmented reality. Tangibility and
presence are closely linked. Presence is a necessary condition
for tangibility. We can only touch an object, if it is present. If we
touch an object, we and the object are both present in the same
space – at least in a mediated way.
In a broad sense, the sonically tangible cube relates to all projects where virtual objects are perceived as present in real space.
In particular, it relates to those projects that use sound and/or
tangible interaction to convey the presence of (invisible) virtual
objects in real space.
A project where the presence of something virtual is perceived
tangibly is Sekiguchi, Hirota, and Hirose’s (2005) so-called Ubiquitous Haptic Device. The little box, when shaken, conveys a feeling
of a virtual object being inside the device. Similarly, a wearable
haptic device by Minamizawa et al. (2007), called the Gravity
Grabber, allows participants to perceive the ruffle of the water in
a glass, although he/she actually is holding an empty glass.
Projects that let a participant experience the spatial presence of “something that is not really there” by means of sound
are Cilia Erens’ and Janet Cardiff’s sound walks (Erens, Cardiff,
cf. Schraffenberger and Heide 2013b). Both artists use binaural
recordings of everyday sounds that blend in with the sounds present in the real environment when the participant navigates the
space and listens to the composition on headphones. Listening to
the binaural audio creates a hybrid space in which the virtual and
the real coexist, relate to one another and create “a new world as
a seamless combination of the two” (Cardiff).
A discussion of Janet Cardiff’s work by Féral (2012) also helps
our understanding of sonically tangible objects. The researcher
defines “presence effects” as the feeling that an object (or body)
is really there, even when one knows that it is not. This relates
to the experience of the sonically tangible cube. While the ears
make it feel as if the cube is present, the lack of tactile (and visual)
stimuli informs us that nothing is there.
3.2 Open Air Instruments & Sound
Installations
Our project relates to the field of sonic interaction. In particular, it relates to instruments and installations that use hand or body
gestures in free space to produce sound, such as the above-mentioned Theremin. Like our research, such gesture instruments
and installations are based on a mapping between body movement and sound.
The artwork ‘Very Nervous System’ (1986-1990) by David
Rokeby is an early example of an interactive sound installation
where body movement in open space generates sound. However, the sound of such artworks and instruments like the Theremin usually does not appear to originate from the location of
the movement, which is a key difference from sonically tangible
objects. Furthermore, with few exceptions, they do not (try to)
express the presence of virtual objects in space.
One exception – an instrument that actually does convey the
presence of virtual objects in space – is the invisible drumkit by
Demian Kappenstein and Marc Bangert (The Invisible Drums of
Demian Kappenstein and Marc Bangert. 2011). In their invisible
setup, each virtual drum is placed at its regular position in space.
Hitting the invisible virtual drums triggers pre-recorded samples of a real drumset. The position of the sticks and the speed of
the movement determine which sample is triggered. Similarly to
the cube, the virtual drum kit becomes perceivable through the
interaction.
3.3 Human-Computer Interaction
One possible area of application for sonically tangible objects is
the field of Human-Computer Interaction, and in particular intangible displays. Intangible displays are visual virtual interfaces
that appear in mid-air, in front of a user’s eyes. Aside from simply
displaying information they also allow for interaction: Users can
touch virtual objects, such as buttons, with their physical hands.
However, intangible displays do not provide tactile feedback when
they are touched. Chan et al. (2010) address this lack of tactile
feedback by providing visual and audio feedback. In their experiments, they played short sounds whenever participants touched
the surface of the intangible display. Their project differs in the
sense that sound is used to inform the user about the fact that
they have successfully touched the object (as feedback) and not as
an integral part of the object.2
Another related HCI project is the so-called BoomRoom (Müller
et al. 2014). In this room, sounds seem to originate from certain
spots in real space (this is realized with a circular array of 56 loudspeakers and Wave Field Synthesis). These sounds can be ‘touched’
in order to grab, move and modify them. Although related, their
project differs in the sense that it focuses on the localization and
direct manipulation of sound rather than on the presence and
tangibility of virtual objects.
3.4 Perception
Haptics
2 Although originally not intended
this way, the concept of sonically
tangible objects could be used
to improve the interaction with
intangible displays. It could increase
the spatial presence of the display,
provide better feedback about the
user’s hand position and movement
through the display and is likely to
make “the awkward feeling of
‘touching’ a mid-air display” (Chan
et al. 2010, p. 2626) less awkward
and more tactile-like.
The sonically tangible cube is perceived by explorative hand
gestures. This links it to the field of haptics. Haptic perception
typically involves active exploration (Lederman and Klatzky
2009). Haptics is commonly understood as a perceptual system
that derives and combines information from two main channels:
kinesthetic perception and cutaneous sensation (Lederman and
Klatzky 2009). Cutaneous sensation is derived from the receptors that are found across the body surface and that allow us to
feel, for example, pressure or temperature. The kinesthetic channel refers to perception of limb position and movement in space,
which is derived from the receptors embedded in muscles, tendons and joints.
Kinesthetic perception also plays a key role in the perception of the virtual cube – it provides the participants with the
information about where and how fast their fingers are moving
in space. This awareness is crucial in order to link what one hears
to one’s movement in space. What makes the perception of the
sonically tangible cube different from common haptics is the lack
of cutaneous feedback (including tactile sensations). Rather than
‘feeling something at the position where they touch an object’ the
participants ‘hear something at the position where they touch the
object’.
Tactile Illusions and Cross-modal
Interactions
The sonically tangible cube aims to create a tactile-like experience. There are several studies that indicate that sound can inluence actual tactile experiences. The “Parchment-skin illusion”
(Jousmäki and Hari 1998) shows that modifying the sounds that
accompany hand-rubbing can influence the tactile sensation of
the skin. It was found that accentuating the high frequencies can
lead to the experience of a higher level of skin roughness. Hötting
and Röder (2004) have discovered another auditory-tactile illusion. In their experiment, one tactile stimulus was accompanied
by several tones. As a result, participants reported that they perceived more than one tactile stimulus. What sets these illusions
apart from our cube is that in both cases, the participants were
presented with a tactile stimulus.
Sensory Substitution
The cube relates to projects that use sound to substitute touch.
One such sensory substitution system is F-Glove (Haidh et al.
2013). This haptic substitution system aims at helping patients
that suffer from the symptoms of Diabetic Peripheral sensory
Neuropathy, such as sensory loss at the fingertips and resulting
difficulties with manipulating objects. F-Glove uses audio feedback to inform the patients of the pressure they apply to objects.
The volume of the sound is mapped linearly to the
applied pressure. Unfortunately, it is not clear whether the system
simply informs the patients of the pressure they use via sound or
whether they start experiencing pressure directly, via the auditory sense. Naturally, the experience of the cube is quite different
from not having a sense of touch, as your hand can simply reach
through the virtual sonic object.
4 Reflection & Outlook
With the sonically tangible cube we have introduced a prototype
of a sonically tangible object and a new, sound-based form of
augmented reality. The proposed cube is invisible and non-tactile. According to our experience, it is nonetheless perceived as
spatially present in our real, physical environment. This suggests
that virtual objects do not have to look or feel like real objects in
order to be a believable part of our real, physical space.
The virtual cube is non-tactile and yet tangible. The experience
of the cube can be seen as one possible answer to the question of
how it could feel to touch an object that provides no tactile feedback. According to our impression, the virtual sonic object offers
an almost-tactile experience that has no equivalent in a purely
physical world. However, this still has to be confirmed by experiments with unbiased participants.
The current implementation of the cube primarily serves as a
proof of concept. While we are happy with its current state, we
have many ideas on how to improve the cube and explore the concept of sonically tangible objects further.
Concerning the sonic qualities, future experiments can reveal
which sounds are most suitable for creating tactile-like experiences and possibly test whether sounds that are created with the
hands work best. It would be interesting to find out more about
how to sonically represent imaginary material and communicate
different densities, textures and shapes with sound.
So far, we have chosen to work with binaural recordings. In the
future, it will be valuable to explore computational methods for
simulating the sounds’ origins in space. If this is successful, it will
be much easier to allow participants to move through space freely
and experience the cube from different positions. Furthermore, it
will be simpler to create polymorphic sonically tangible objects of
different shapes and sizes and to place them at various positions
and in different spaces.
One aspect that was left aside so far is the topic of (in)visibility. This offers several intriguing directions for future research.
For example, we are eager to learn how participants interpret the
absence of visual clues. On the one hand, it might lead to a contradiction between senses: “I can hear it, but I see that nothing
is there”. On the other hand, it could be interpreted as a property of the object: “Something is there, it is invisible”. Further, it
would be interesting to compare the experience of the cube with
open and closed eyes, and, as an additional condition, also add a
visual dimension to the cube (e.g., by means of a head-mounted
display) to learn more about the influence of (in)visibility on the
experience.
One limitation of this research is that so far, our inferences
are based on informal tryouts and our own subjective experience
with the cube. Our experience might not fully represent how others perceive the cube and we cannot entirely rule out the possibility that it is inluenced by the expectations and hopes we have for
the project. We plan to extend the presented research and conduct
experiments with unbiased participants in the near future.
Due to its interdisciplinary nature, the project has also raised
questions that go beyond our own area of expertise. For example, it would be interesting to learn more about what happens on
a perceptual level. Are sound and kinesthetic information combined, similarly to how cutaneous information and kinesthetic
information are integrated in traditional haptic perception? Can
the combination of spatial sound and kinesthetic information lead
to cross-modal interactions? What happens if the spatial information of the audio does not match the position of the ingers?
Do we perceive the lack of tactile stimuli as “something missing”
and do we fill in this information? We have put much emphasis
on describing the concept in a way that allows other researchers to reproduce it and join our investigation of sonically tangible
objects.
Acknowledgements
We want to thank all participants, colleagues and friends who
interacted with the sonically tangible cube and/or provided feedback throughout the development of this project. In particular, we
want to thank Wim van Eck for creating the image of the cube.
Furthermore, we want to thank the reviewers for their valuable
response.
References
Akamatsu, Masayuki. 2014. aka.leapmotion (Version 0.21). Max external.
Creative Commons Attribution 3.0 Unported License. http://akamatsu.
org/aka/max/objects/.
Buchmann, Volkert, Stephen Violich, Mark Billinghurst, and Andy
Cockburn. 2004. “FingARtips: Gesture Based Direct Manipulation in
Augmented Reality.” In Proceedings of the 2nd International Conference
on Computer Graphics and Interactive Techniques in Australasia and South
East Asia (GRAPHITE ’04), 212–221. ACM.
Cardiff, Janet. Introduction to the audio walks. http://www.cardiffmiller.
com/artworks/walks/audio_walk.html.
Chan, Li-Wei, Hui-Shan Kao, Mike Y. Chen, Ming-Sui Lee, Jane Hsu,
and Yi-Ping Hung. 2010. “Touching the Void: Direct-touch Interaction
for Intangible Displays.” In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI ’10), 2625–2634. ACM.
Erens, Cilia. The Audible Space. http://www.cilia-erens.nl/
cilia-erens-2/?lang=en.
Féral, Josette. 2012. How to define presence effects: the work of Janet Cardiff.
Edited by Gabriella Giannachi, Nick Kaye, and Michael Shanks. 29–49.
Routledge.
Gibson, Simon, and Alan Chalmers. 2003. “Photorealistic augmented
reality.” In Eurographics 2003 Tutorial. Granada, Spain, September.
Haidh, Basim, Hussein Al Osman, Majed Alowaidi, Abdulmotaleb
El-Saddik, and Xiaoping P. Liu. 2013. “F-Glove: A glove with force-audio sensory substitution system for diabetic patients.” In 2013 IEEE
International Symposium on Haptic Audio Visual Environments and Games
(HAVE), 34–38. IEEE.
Hötting, Kirsten, and Brigitte Röder. 2004. “Hearing cheats touch, but
less in congenitally blind than in sighted individuals.” Psychological
Science 15 (1): 60–64.
Jousmäki, Veikko, and Riitta Hari. 1998. “Parchment-skin illusion:
sound-biased touch.” Current Biology 8 (6): R190–R191.
Kim, Sinyoung, Yeonjoon Kim, and Sung-Hee Lee. 2011. “On Visual
Artifacts of Physics Simulation in Augmented Reality Environment.” In
Proceedings of the 2011 International Symposium on Ubiquitous Virtual
Reality (ISUVR ’11), 25–28. IEEE.
Lederman, Susan J., and Roberta L. Klatzky. 1987. “Hand movements:
A window into haptic object recognition.” Cognitive Psychology 19 (3):
342–368.
———. 2009. “Haptic perception: A tutorial”. Attention, Perception, &
Psychophysics 71 (7): 1439–1459.
Max. 2014. Visual programming language. Version 6.1.9. https://cycling74.
com/products/max/.
Minamizawa, Kouta, Souichiro Fukamachi, Hiroyuki Kajimoto,
Naoki Kawakami, and Susumu Tachi. 2007. “Gravity Grabber:
Wearable Haptic Display to Present Virtual Mass Sensation.” In ACM
SIGGRAPH 2007 Emerging Technologies (SIGGRAPH ’07). ACM.
The Invisible Drums of Demian Kappenstein and Marc Bangert. 2011. Blog
post, October. http://www.moderndrummer.com/site/2011/10/theinvisible-drums-of-demian-kappenstein-and-marc-bangert.
Møller, Henrik. 1992. “Fundamentals of binaural technology.” Applied
Acoustics 36 (3–4): 171–218.
Müller, Jörg, Matthias Geier, Christina Dicke, and Sascha Spors.
2014. “The BoomRoom: Mid-air Direct Interaction with Virtual Sound
Sources.” In CHI ‘14 Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, 247–256. ACM.
Rokeby, David. 1986–1990. Very nervous system (interactive sound
installation), http://www.davidrokeby.com/vns.html
Schraffenberger, Hanna, and Edwin van der Heide. 2013a. “From
Coexistence to Interaction: Influences Between the Virtual and the
Real in Augmented Reality.” In Proceedings of the 19th International
Symposium on Electronic Art, ISEA2013, edited by K. Cleland, L. Fisher,
and R. Harley, 1–3. Sydney.
———. 2013b. “Towards Novel Relationships between the Virtual and the
Real in Augmented Reality.” In Arts and Technology, edited by Giorgio
De Michelis, Francesco Tisato, Andrea Bene, and Diego Bernini, LNICST
116:73–80. Springer Berlin Heidelberg.
Sekiguchi, Yuichiro, Koichi Hirota, and Michitaka Hirose. 2005.
“The Design and Implementation of Ubiquitous Haptic Device.” In
Proceedings of the First Joint Eurohaptics Conference and Symposium on
Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC
’05), 527–528. IEEE.
Making a Magic Lantern,
Horror Vacui Data Projector
Mark Hursty
National Glass Centre Research, University of Sunderland, UK
[email protected]
www.markhursty.co.uk
Victoria Bradbury
CRUMB, University of Sunderland, UK
[email protected]
www.victoriabradbury.com
Keywords: Magic lantern, horror vacui, glass art, electronic
art, creative data imaging, digital manufacture, waterjet
cutting, computation, aesthetics, algorithm, interdisciplinary
collaboration.
This paper describes the creative process behind an artwork that
combines and projects data in sculptural ways. This projection
comes in the form of a reimagined magic lantern device called
the Magic Lantern Horror Vacui Data Projector. This device is the
result of collaborative glass art and electronic art techniques.
Central to the projection system are re-envisaged glass magic
lantern slides. No longer flat, they are squat six-sided boxes made
entirely of glass. These slide boxes are filled with data-representational glass forms through which light is projected. The projected
images are emitted from a three dimensional aggregate of data
represented by coloured pieces of transparent glass. The appearance of the projection is manipulated by positioning these slides
along varying axes through servomotors. Code is being developed
to read input from the projection, generate additional data, and
control the positioning of the boxes.
1 Introduction
Horror Vacui: In visual arts, it is the fear of leaving empty spaces unadorned.
(Ettinghausen, 1979) In science, horror vacui refers to the physics postulate
of ‘nature abhors a vacuum’. (Grant, 1981)
Fig. 1 Magic Lantern Horror
Vacui Data Projector, 2015,
Glass, servomotors, Arduino
microcontroller.
The Magic Lantern Horror Vacui Data Projector (MLHVDP) uses
analogue projection, creative glass and electronic art techniques
that integrate and re-process data cyclically using sensors and
servomotors. The term horror vacui relates to the function and
appearance of the device as it is filled with data. The creative benefits of this project are that it combines illuminated kinetic glass
sculpture and performative computational modes to counter the
usually tacit, inadvertent, and invisible process that data undergoes as it is transferred through networks.
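As the code governing the device is still in development, nothing here reflects its actual implementation; purely as a loose illustration of the sensor-to-servomotor loop just described, an Arduino sketch along the following lines could tilt one slide box in response to a sensor reading. The pin numbers and the linear mapping are assumptions made only for the example.

```cpp
// Hypothetical Arduino sketch: tilt one glass slide box with a servomotor in
// response to a light-sensor reading taken from the projection. Purely
// illustrative; not the device's actual (in-progress) control code.

#include <Servo.h>

Servo slideServo;
const int sensorPin = A0;   // light sensor watching the projected image
const int servoPin  = 9;

void setup() {
  slideServo.attach(servoPin);
}

void loop() {
  int reading = analogRead(sensorPin);            // 0-1023 reading from the sensor
  int angle   = map(reading, 0, 1023, 0, 180);    // rescale to a servo angle
  slideServo.write(angle);                        // reposition the slide box
  delay(50);
}
```

Extending this loop to several servomotors, or to readings derived from the weather and location data mentioned below, would follow the same pattern of sensing, mapping and repositioning.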
The projector uses clear and coloured glass segments to represent data. The transparent pieces, placed by a projectionist,
connect and overlap in the visual arts sense of horror vacui; a
fear of empty spaces. These palimpsestic shapes are seemingly
indecipherable at first. While the data forms may seem arbitrary
and practically unintelligible, they will be entered into a system
that reads and re-interprets their shapes and imagery. The tension caused by this abstraction and organisation relates to Etinghausen’s description of managing chaos through tessellated tiling
techniques in Islamic art.
Each unit is then completely filled with a design, or at least as close as
possible. Although the motifs are now repetitive, the horror vacui is again
managed in an esthetically satisfactory manner. (Ettinghausen, 1979)
Fig. 2 Glass Slide Box with data
projection, 2015, Waterjet cut fusing
glass.
The ‘satisfaction’ Ettinghausen describes in relation to tiling
motifs, here, relates to the mediation of data by a projectionist
who collates and illuminates the information by arranging and
constructing interlocking 3D glass shapes inside of the boxes. The
way that glass represents data is through colour-coding; assigning each colour and shape a category and value. This data is being
collected from weather and location tracking sensors through a
collaboration with researchers at The Centre for Doctoral Training in Cloud Computing for Big Data at Newcastle Science Central.
The main objectives of this paper are to demonstrate an alternative to familiar modes of data display by screen-based computing devices and to extend creative collaboration between glass
and electronic art. The aims supporting this objective are: to
reinterpret artistic precedents as inspiration for developing an
analogue, manually interactive and multidimensional system of
data representation and to develop code to govern the operation
and interpretation of the device.
2 Influential Artworks
Fig. 3 Raree Show, 2009, Bradbury,
Hornell, NY, USA.
Fig. 4 Data Raft, 2014, Bradbury,
Sunderland, UK.
The MLHVDP draws upon processes used previously in the authors’
respective art practices. Victoria Bradbury has combined antique
magic lantern technologies with digital/analogue projection and
personal data with performativity. Mark Hursty has implemented
pressed glass and waterjet cut glass techniques through creative
and innovative applications. The MLHVDP is also influenced by
Fabio Lattanzi Antinori’s The Obelisk, 2012 and Semiconductor’s
Data Projector, 2013, both of which combine sculptural elements
with data. Finally, Joseph Cornell’s 1936 found footage film, Rose
Hobart, is related to this project as an art historical reference.
In Raree Show, 2009, Bradbury live-projected 116 hand-drawn
magic lantern slides using a 1940s opaque “Radiopticon” projector. The imagery appeared on a screen on the side of a sculptural
circus wagon. The performance portrayed alternate outcomes of
networked culture paired with financial collapse. The use of magic
lantern techniques and slide performativity in Raree Show, 2009,
influences the use of analogue slide projection in this new project.
In Data Raft, 2014, Bradbury created a forum for gallery visitors to retrieve and re-contextualise their personal data. Code
and making processes were used to transform email metadata
from intangible and private to tangible and public. A participant
built a stick raft, attached a bespoke computer embroidered sail
with their select data points, and set the vessel afloat on pools
installed in the gallery.
This work underlines the performativity of the programmatic
processing that data undergoes online. Data points are frozen in
time as they are removed from the browser and embroidered on
the sails. While engaged in a hand craft process, participants are
temporarily unable to use their mobile devices to unconsciously
generate additional data for the network. Data Raft, 2014, led to
MLHVDP, which aims to abstract data through projection and
sculptural means.
Central to the MLHVDP are the transparent glass slide boxes.
These boxes, and the methodology behind their creation, emerged
from Hursty’s work with waterjet cut transparent pressed glass
moulds in a series called Puzzle Boxes, 2014. In that on-going
research, the one-time use boxes are filled with molten glass and
then pressed with a central glass plunger, which fuses all of the
elements as one object. The mould-pressing process itself illustrates horror vacui as the mould’s voids are completely illed
with glass. Using hot glass also resonates with Edward Grant’s
description of ancient horror vacui experiments, most notably
with burning candles.
“Amongst the most striking illustrations that nature abhorred
a vacuum were those employing fire and heat.” (Grant, 1981)
The boxes were conceived to replace, in a didactic sense, opaque
metal press moulds, so that practitioners can see how the molten
glass behaves as it fills the mould crevices. The significance of
this grew as fusing a mould into the finished object changes how
that mould can be perceived: from a temporary apparatus to an
essential component of the finished artwork. This methodological problematising of ancient craft precedents is also at work in
the MLHVDP through reinterpreting projection devices and data
display.
Fig. 5 Puzzle Box Press Molds, 2014,
Hursty, waterjet cut fusing glass,
Glass Art Society Demonstration,
Chicago, IL, USA.
Fig. 6 Puzzle Boxes, 2014, Hursty,
Shanghai Museum of Glass, China.
The process of making melting glass boxes requires complex
glass-to-glass joinery with no metal fasteners. This is because
the coefficient of expansion (COE) of glass is different from that
of most metals. The negative result of differing COEs is cracks
where the different glasses or metal fasteners intersect and join.
Where fasteners were needed for the Puzzle Boxes, they too were
made out of glass so that they would also melt into the final box.
Such complex joinery led to the title of Puzzle Boxes. Assembling
them was like manipulating Chinese puzzle boxes, wood or ivory
puzzles that were exported to the West in the nineteenth century.
While experimenting with molten glass and exhibiting the Puzzle Boxes, it became apparent that the boxes themselves acted as
miniature projection devices. With only natural light as a source,
they were like opaque projectors with light transmitting through
their structural elements and projecting compelling detail on
nearby surfaces. This discovery sparked the idea to use the boxes
as the basis for a system of analogue projection. The MLHVDP for
sculptural data projection is the first result of such a system.
Fig. 7 The Obelisk, 2012, Fabio
Lattanzi Antinori.
Fabio Lattanzi Antinori’s The Obelisk, 2012, serves as a precedent for pairing data with a sculptural glass object. Here, data,
based upon a live feed of news stories about crimes against humanity, controls the levels of opacity of a box. The box is made of sheet
glass (or Perspex) and electrochromic film, which changes from
opaque to transparent depending upon whether electrical current
runs through it. The streaming data causes each side of the box to
independently alternate from opaque to transparent. This creates
variable views of how the box and the sculpture behind it can be
seen. Connections can be made between the clarity of the boxes’
facets, war crimes, and levels of awareness raised by their reported
descriptions in the media. Whereas The Obelisk, 2012, is static and
is viewed in both transmitted and reflected light, the slide boxes
in the MLHVDP are physically in motion and depend on transmitted light to be projected.1 The formal difference between the
two approaches is that Lattanzi’s work is encountered by looking
at the glass box directly, while Hursty/Bradbury’s is meant as a
type of lens that directs and focuses light elsewhere.
Fig. 8 Data Projector, 2013,
Semiconductor.
1 A concise distinction between
the transmitted and reflected
light follows in this example from
microscopy. “A transmitted light
microscope has a light source below the
microscope stage and sends light
upwards towards the sample and up
to the viewing point. A reflected light
microscope has a light source above
the sample and what is seen through
the view point are light waves that
have reflected off the sample.”
The art duo Semiconductor, Ruth Jarman and Joe Gerhardt, created Data Projector, 2013. This piece offers a creative example of
integrating digital projection, sculptural forms, and data processing. It uses data gathered from a forest, charting the tree canopy
from the vantage point of an observation tower over the course of
a year. According to the artists, “There’s a sense of the hand made
at work; the clunky tower and the hand-made carbon paper, suggesting the presence of man as observer trying to make sense of
the world. Yet, there’s also a precision which comes with the data,
bringing structure and rhythm and creating a sense of complexity to what we see and hear. This conversation between analogue
and digital plays with the divide between how science represents
nature and how we experience it.” (Jarman and Gerhardt, 2015)
Fig. 9 Rose Hobart, 1936, Joseph
Cornell, found footage film.
2 In 1936, Cornell debuted Rose
Hobart at the Julien Levy Gallery
in New York City. His film, the
footage for which he scavenged
from destruction, drastically
re-edits the feature film
East of Borneo, 1931. This
recontextualisation of East
of Borneo’s subject matter was
emphasised by Cornell’s choice
to project Rose Hobart through
a tinted piece of blue glass.
In the audience at the gallery were
André Breton, Salvador Dalí and
other contemporaries who were
participating in the MoMA’s first
surrealist exhibition of 1936.
Dalí’s absurd, and apparently jealous,
reaction to the film was to knock
over Cornell’s projector and call him
a thief. Dalí claimed to Levy that,
“My idea for a film is exactly that,
and I was going to propose
it to someone who would pay
to have it made… I never wrote
it or told anyone, but it is as if
he had stolen it.” (Solomon, 1997)
The structure, rhythm and complexity Jarman and Gerhardt
mention are emphasized in the unusual, round-shaped projections that are a result of recording the forest canopy in 360°.
The MLHVDP also uses attributes of analogue construction. Its
structural apparatus, the slide boxes, serve as an architectural
framework in the way that the wooden observation tower does
in Semiconductor’s piece. In the MLHVDP, however, the architectural components also function as a lens for the data and as the
data itself. The result of this condensed form and function is
projections that are asymmetrical rather than rectangular, as in
familiar modes of data display.
Rose Hobart, 1936, is a pioneering found footage film by the
American surrealist artist Joseph Cornell. The film was significant as a new way to gather, process and project film. This significance is reflected in the creation and use of data in the MLHVDP.
Also influential to this new work was a notorious incident that
occurred the first time Rose Hobart was shown.2 During the film’s
debut, Salvador Dalí knocked over the projector and accused Cornell of stealing the idea of found footage film from Dalí’s own
subconscious.
Several aspects of the MLHVDP are summed up in this incident.
First, Cornell’s unorthodox use of collected and re-edited footage offered a new way of seeing and reading the content. Found
footage filmmaking can be compared to physically salvaging data
then reorganizing it, reinterpreting it, and re-projecting it; even
down to reorganizing the mechanics of projection in order to
obtain a new understanding of the material. Second, when Dalí
knocks over Cornell’s projector and claims that Cornell stole his
subconscious idea, the projector becomes a proxy for Cornell himself. Dalí’s violence is directed toward the mechanics of the projection. This mediates the ramifications of content on the projectionist, with violence directed through the projection device.
The essential dynamic of this relationship is re-performed by
the MLHVDP in a loop of data collection and visualisation that
has been abstracted, projected then reinterpreted through code.
With respect to who collected the data and how (the researchers through sensors), and who abstracted the data and how (the
authors through projection), this dynamic emphasizes a creatively
constructive and malleable view of attribution.
The above artworks represent different approaches that are
reflected across the iterations of the MLHVDP. Like the MLHVDP,
their combined methods problematize the mechanics of projection, challenge orthodox screen-based modes of data display, and
serve as a provocation for how data can be collected and processed.
3 Developing the Magic Lantern Horror Vacui
Data Projector
3.1 Making the Projector
The MLHVDP has been developed through a series of steps. The
glass was designed, cut and constructed at the National Glass
Centre at the University of Sunderland; the projector was performed at Gateshead Algorave #2; it was exhibited through Gateshead Arts and it is being further developed through the use of
site-specific data.
The glass components of the projector base and slide boxes
were constructed through waterjet cutting. The slide boxes were
assembled atop a transparent Stewart platform to allow light to
pass through. In this way, the light source could be placed either
above or below the motors. If placed below, the mechanics of the
base are projected in addition to the glass boxes (Figure 1). The
platform has servomotors at the base, which are programmed
through an Arduino microcontroller. At the Algorave, this kinetic
work was performed live as Bradbury added and subtracted transparent coloured pieces within the glass box. A variety of light
sources were tested to project through the glass onto the walls of
the performance space.
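The control software for the platform is not reproduced in the paper; purely as an illustrative sketch, a Processing program along the following lines could stream pose commands from the performance laptop to the Arduino over a serial link. The six-servo layout, the baud rate and the comma-separated message format are assumptions made for this example, not a description of the authors’ firmware.

import processing.serial.*;

Serial arduinoPort;          // serial link to the Arduino driving the servos
int numServos = 6;           // a Stewart platform is actuated by six servos

void setup() {
  // open the first serial port found, at a typical Arduino baud rate
  arduinoPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // slowly tilt and rotate the slide box by cycling the servo angles
  String msg = "";
  for (int i = 0; i < numServos; i++) {
    int angle = 90 + int(30 * sin(TWO_PI * frameCount / 120.0 + i));
    msg += angle + (i < numServos - 1 ? "," : "\n");
  }
  arduinoPort.write(msg);
}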
Observations from the Algorave performance indicated that the
projection is most complex and engaging when the glass boxes
are filled with three-dimensional glass objects, which can be
turned in space and evaluated from different vantage points. In
comparison, a flat slide can only be viewed in one way. Tunnelling
light through these transparent structures presents the edges,
Fig. 10 Magic Lantern Horror Vacui
Data Projector, Performance, 2015,
Gateshead Algorave #2.
three-dimensionally. The movements of the projected image are
visually similar to toggling 3D computer rendered objects in virtual space.
Next, the authors are working with the cloud computing
researchers who are providing weather and location tracking data
sets that are specific to the city of Newcastle, UK and the building
in which they work, The Core in Newcastle Science Central. As
this collaboration develops, the researchers’ data sets will serve as
inspiration for how data can be encoded within the coloured glass
structures, and, in turn how their projections can be decoded.
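As a minimal sketch of one possible encoding, and only that (the actual mapping is still to be developed with the researchers; the temperature range and the blue-to-red scale below are assumptions), a single weather reading could be translated into a target colour for a glass element as follows:

// Illustrative only: map a temperature reading to a glass colour.
color encodeTemperature(float tempCelsius) {
  // normalise an assumed -5 to 35 degree Celsius range to 0..1
  float t = constrain(map(tempCelsius, -5, 35, 0, 1), 0, 1);
  // blend from a cold blue to a warm red
  return lerpColor(color(0, 80, 255), color(255, 40, 0), t);
}

void setup() {
  size(200, 200);
  background(encodeTemperature(12.5));   // e.g. a 12.5 degree reading
}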
3.2 Properties of the Projector
The visual properties of glass that underpin the MLHVDP are clarity and uniformity of colour. If these qualities are used creatively,
light can be transmitted through glass in stimulating ways that
transcend its material reach. The physical properties of glass that
underpin this device are that it is electrically inert, non-flammable
and archival. When the glass is projected, then read and re-interpreted by code, the enticing properties of glass, code, and data
integrate and expand into new forms. These forms, unless broken,
are permanent. Unlike digital archives, they will not degrade over
time.
The glass data forms are encoded and decoded as they are projected. This coding process is twofold. The first stage is projected
light in the form of a bespoke overhead projector. The slide box
can be projected flat like a conventional slide, but it also has the
range of motion to be rotated and tilted for 3D views. As it moves,
variable perspectives of the sculpture illuminate the space. The
goal of the second stage, which is still under development, is to
create an algorithm that will read and interpret visual information from the projection, then re-position the slide box to frame
certain vantage points.
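As a sketch only of how that second stage might work (the paper does not yet specify the algorithm; the camera set-up, the sampling step and the centring criterion below are assumptions made for illustration), a Processing program could watch the projection with a camera, locate the brightness-weighted centre of the projected image, and use its offset from the frame centre to decide how the slide box should be re-positioned:

import processing.video.*;

Capture cam;   // camera pointed at the surface that receives the projection

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  // brightness-weighted centroid of the projected image
  float sum = 0, cx = 0, cy = 0;
  for (int y = 0; y < cam.height; y += 4) {
    for (int x = 0; x < cam.width; x += 4) {
      float b = brightness(cam.pixels[y * cam.width + x]);
      sum += b;
      cx += b * x;
      cy += b * y;
    }
  }
  image(cam, 0, 0);
  if (sum > 0) {
    // offset of the bright region from the frame centre; a tilt or
    // rotation command for the slide platform could be derived from it
    float dx = cx / sum - width / 2.0;
    float dy = cy / sum - height / 2.0;
    println("offset: " + dx + ", " + dy);
  }
}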
While data is now predominantly considered and viewed in digital ways, Sara Diamond emphasizes its fundamental analogue
nature when she states, “Data can be numbers, words or images.
Data can be collected manually (as it has been for centuries), and
then put into a computer.” (Diamond, 2009) The MLHVDP returns
data to an analogue, material state, contrasting the fragility and
traditionally static nature of cold glass with the ephemeral, malleable nature of both data and projection.
The re-envisioning that this piece enacts allows us to apprehend multiple views of the ‘data’, a process that would normally
be abstracted and highly obfuscated. These multiple perspectives
could be helpful as a tool to visualize abstract data as concrete
interconnected objects. What might be obvious in one view, may
present nuances in another. One example of this is seen in the
edges of the boxes where the glass ‘data’ appears within dark contour lines. These lines maintain the projected illusion of turning
an object in space. The advantage of this is that the data is made
less abstract as it becomes a consciously sculpted, tangible object
that is then re-abstracted for further interpretation through projection. This re-projection reflects a constructive distance from
which the data can be evaluated.
4 Conclusions
This paper describes the artistic development of an analogue projection device for visualising data. The diverse influences include
equating data collection with the horror vacui, or nature’s abhorrence of a vacuum; an altercation between Salvador Dalí and
Joseph Cornell concerning attribution of the invention of found
footage filmmaking; the revival and reinterpretation of magic
lantern projections and pressed molten glass; and the creative
performativity of sculptural data. These coalesce in order to
address contemporary questions of data attribution and generativity in a collaborative interdisciplinary artwork.
In further iterations of the MLHVDP, additional modes for
interpretation of live-generated data may abrogate the horror
vacui. This could mean that the device thereafter can be simply
referred to as a “Magic Lantern Data Projector”. That development
is anticipated to hinge on the effectiveness of the motorised
slide platform and the light source in concert with the artist-written software.
The purpose of the MLHVDP is not only to present a new aesthetic to the portrayal of data, but also to offer potentially productive new techniques to the fields of new media and glass art. This
new combination of modes arose by problematizing and performing a process of projection, obfuscation, and re-interpretation of
data through a collaborative artwork. The resulting dynamic is
intended to rethink staid representation, not only by what is projected, but also by the way each medium is perceived. In the case
of new media art, these results could yield new ways to view data
through analogue projection and sculptural means. This multidimensionality could also serve to expand and diversify the appeal
of glass art and glass slide projection, not merely out of a sense of
nostalgia for manual participation and material specificity, but as
innovative and content-generative media in their own right.
References
Antinori, Fabio Lattanzi, accessed 20 April, 2015, http://
fabiolattanziantinori.com/obelisk.php
Davidson, Michael W., “Molecular Expressions,” accessed 28 April, 2015,
http://micro.magnet.fsu.edu/primer/anatomy/reflected.html
Diamond, Sara, “A tool for collaborative online dialogue:
CODEZEBRAOS” (PhD thesis, University of East London, 2009): 40.
Ettinghausen, Richard, 1979, “The taming of the horror vacui in Islamic
art,” Proceedings of the American Philosophical Society Vol. 123, No. 1
(Feb. 20, 1979): 15-28.
Grant, Edward, Much ado about nothing: Theories of space and vacuum
from the Middle Ages to the scientific revolution (Cambridge: Cambridge
University Press, 1981): 77.
Hursty, Mark, “Making glass road muqarnas through digital road
process,” International Symposium of Electronic Art (ISEA) Journal (2014).
Hursty, Mark, “Pressed Into Service: Pressing Studio Glass Art in the US,
UK and China,” Glass Art Society Journal (2014): 51-53.
Jarman, Ruth and Gerhardt, Joe, accessed 20 April, 2015, http://
semiconductorfilms.com/art/data-projector/
Solomon, Deborah, Utopia parkway: The life and work of Joseph Cornell.
(London: Pimlico, 1997), 87-89.
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
ZoOHPraxiscope:
Turning the Overhead
Projector into
a Cinematographic Device
Christian Faubel
Academy of Media Arts, Cologne, Germany
[email protected]
Keywords: Cinematography, Media Archeology, Overhead
Projector, Zoopraxiscope
The ZoOHPraxiscope combines the overhead projector with a
spinning picture-disc and works similarly to the Zoopraxiscope. The Zoopraxiscope is a historical device that was invented
by Eadweard Muybridge to animate sequences of pictures. The
ZoOHPraxiscope allows one to combine direct animation and shadow
play with cinematographic animation. Using custom-made electronics to control both flicker frequency and rotation speed of
picture-discs, it is possible to play with various regimes of animation. As motion and light flicker are directly coupled to sound,
the device is also a performance instrument for audiovisual
performances.
1 Introduction
The ZoOHPraxiscope is a modified overhead projector (OHP) and
a re-implementation of a historical device for animating images,
the Zoopraxiscope. The Zoopraxiscope was developed in 1879 by
the photographer Eadweard Muybridge, to project and animate
sequences of pictures showing animals in motion (Hendricks
1975). My purpose in modifying the overhead projector into a cinematographic device is not to create a detailed replica of the Zoopraxiscope, but to fuse two modes of animation in a playful way.
In previous work I have used the overhead projector for creating
shadow plays of moving objects, the first mode of animation. I use
it for creating audiovisual performances (ray vibration 2015) and
as a philosophical toy to convey scientific insight about the theory
of embodiment (Faubel 2013). The second mode of animation is
based on cinematographic animation similar to the animation
technique of the Zoopraxiscope. With the presented setup these
two modes of animation, shadow play and cinematographic animation, can be fused and mixed.
1.1 Early cinematographic devices
Fig. 1 A Zoopraxiscope disc, as it was
used in the Zoopraxiscope developed
by Eadweard Muybridge (source
Wikipedia)
I refer to Eadweard Muybridge’s invention of the Zoopraxiscope as a device for displaying animated images, mainly because
the device’s name can be easily changed to ZoOHPraxiscope. I
could have equally referred to Franz von Uchatius’s Kinetoscope
or Ottomar Anschütz’s Electrotachyscope. All these inventions
and innovations share the technological combination of a picture
disc (see Figure 1.) combined with a Laterna Magica and a shutter
mechanism. All devices appear in a timeframe of only 45 years
between 1845 and 1890 (Füsslin 1993). This era is often designated as pre-cinema, but as Zielinski writes this is a shortsighted
reduction, because the variety of different approaches and of the
motivations that drove the persons involved was too large (Zielinski 1999). What drove people like Muybridge or the chronophotographers Marey and Janssen was not the invention of cinema
but the idea of using photography to reveal a deeper truth about
the world (Canales 2011).
This quest for truth is well demonstrated in the biography of
the photographer Eadweard Muybridge, who had developed an
apparatus to prove that a horse has all of its feet off the ground
during a gallop (Muybridge 1878). He used an array of photographic
cameras connected to switches triggered by strings attached
across the path the horse would run along. Using this technique,
he succeeded in taking a series of photographs showing a horse
with all feet off the ground during gallop.
The Zoopraxiscope is a device he then developed for animating such motion sequences and for showing these sequences to a
broader audience.
1.2 The Overhead projector
Fig. 2 The ZoOHPraxiscope, a
custom-built overhead projector with
a high power LED as light source and
a Zoopraxiscope-disc printed on a
transparency
When taken out of the context where it is known best, the classroom,
the overhead projector is surprisingly fresh instead of being the boring device we remember.
Take for example the performances of the group loud objects
(loud objects 2015): using soldering irons, they assemble electronic circuits on the overhead projector that produce sound.
While one follows a shadow play of hands, soldering irons and
smoke, at some point these shadow objects start to emit rhythms
and sound and become loud. Another example is the performances by Klaske Oenema (Oenema 2015), a singer-songwriter
and storyteller. While she sings, she unfolds a story using images
that are scratched or drawn onto found transparent objects and
placed on the overhead projector. I think that even though these
two examples are very different in style, one being loud, noisy and
experimental, the other being silent, harmonic and poetic, they
share the same magic, the magic of the Laterna Magica and of the
shadow play.
1.3 LED technology & cinematographic
animation
1 “Overheads on bike”, A project in
collaboration with Tina Tonagel and
Ralf Schreiber, funded by
ON – Neue Musik Koeln e.V.
I have been working on a technological update of the overhead
projector, primarily driven by the wish to reduce its energy consumption in the context of a project for creating a mobile overhead projector.1 With the development of high-power LEDs it has
become possible to reduce the power consumption by a factor of
ten and still deliver enough light intensity for classroom presentation. But most importantly for this project, it has also become
possible to easily switch the light source on and off in a fraction
of a second. While standard halogen bulbs have a non-negligible
afterglow, an LED may be switched on and off at high frequency.
The complicated mechanism of a film projector’s shutter may be
realized simply by turning the LED on and off. Combined with a
rotating disc, it is possible to easily create an animation based on the
principle of the Phenakistoscope.
This new possibility has triggered new interest in pre-cinema
animation techniques and there are quite a number of projects
making use of LEDs for creating animations. A recent popular
example is the animation of three-dimensional objects (Smoot
et. al. 2010, Dickson 2003).
There are a number of very good artistic projects that revive
technologies such as Zoetropes, Phenakistoscopes or Zoopraxiscopes. A first example is the mesmerizing performances by the
group Sculpture (Sculpture 2015), who use picture-discs in combination with a high-speed shutter camera and a video beamer. A
second example is the installation Kiss-o-scope by artist Amanda
Long (Long 2015). She developed a custom software to render live
streams of camera images into a Phenakistoscope display projected by a beamer. Common to these projects is that they feature
an obsolete technology that has nonetheless never lost its magic.
Combining the overhead projector with a high-power LED and a
rotating transparent disc makes it possible to project to a broad audience
and, most importantly, to combine direct animation and shadow
play with cinematographic animation. This combination of direct
and of cinematographic animation is the innovative contribution
of this project.
2 Turning the overhead projector into a
Zoopraxiscope
Using the overhead projector as a Zoopraxiscope requires
modifying the light source so that it can be turned on and off quickly, and a
system for creating spinning images.
2.1 Replacing the light bulb with a high-power LED
A standard overhead projector is equipped with a 250 Watt light
system. It uses a 24 Volt halogen lamp that is operated with a
current of 10 Ampere. To provide such large amounts of current
requires a big transformer that converts the 220 Volt AC current
into 24 Volt DC current. Because the halogen lamp is an incandescent lamp, a lot of heat is produced. As a consequence most
overhead projectors are equipped with an active cooling system.
Because the lamp emits light in all directions a mirror is placed
behind the light bulb to reflect the light in the direction of the Fresnel
lens. In front of the lamp a diffusing lens is mounted that spreads
the light so that the Fresnel lens is illuminated homogeneously.
With the introduction of affordable high-power LEDs it has
become possible to greatly simplify the overhead projector and
to drastically decrease its power consumption, while providing
approximately the same light intensity.
To run a 20 Watt LED a constant-current driver is needed;
these can be bought off the shelf. A 20 Watt LED that runs at 14
Volt will draw a current of about 1.4 Ampere, which can be provided by
a very cheap and very small switch-mode power supply. High-power
LEDs are arrays of LEDs that are mounted on a surface. The light
they emit is directional and covers an angle of approximately 120
degrees and no extra lens is needed to homogeneously illuminate
the Fresnel lens. High-power LEDs may be passively cooled; it is
sufficient to mount them on a large cooling block. All in all, replacing the halogen lamp with an LED is very simple and also simplifies the projector. The active cooling fan, the big transformer and the
optics with the bulb can be removed, leaving ample space for
installing a bigger cooling body with the LED mounted on it.
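As a rough check of these figures, using the nominal values quoted above: the halogen system dissipates about 24 V × 10 A = 240 W, while the LED draws roughly 20 W / 14 V ≈ 1.4 A, i.e. 20 W in total, which is consistent with the claimed reduction in power consumption by about a factor of ten.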
2.2 Controlling the high-power LED
Fig. 3 The bi-core unit: the left
potentiometer controls the on-time of
the LED, the right potentiometer the
off-time.
All standard constant-current drivers for LEDs offer a pulse
width modulation (PWM) interface for dimming. This interface is
intended for dimming the LED by turning it on and off at a high
frequency at which no flicker is perceived. But it can be operated at
any frequency. In order to create the illusion of continuous motion
from discrete images, the rate has to be at around 18 frames per
second, as in standard cinema.
A second parameter besides the frequency at which the LED is
turned on and off is the duration of the on-state of the LED. This
is an important parameter for continuously rotating picture-discs.
In order to have a stable image without motion blur the on-time has
to be very short. This short on-time corresponds to the thin slits in
the classical zoetrope cylinder. When the slits are too wide the
animated image is blurry (Füsslin 1993). Analogously, when the
on-time is too long the image becomes blurry.
Both parameters, frequency and on-time, can be controlled with
a very simple analog circuit, the bi-core oscillator (Hasslacher &
Tilden 1995) (see also Figure 3.). While there exist other oscillator
circuits, I use the bi-core because it can also drive motors. This
offers the possibility of coupling the flicker frequency with the driving of
the motor, in order to synchronize both. With the two variable
resistors of the bi-core circuit, the on-time and the off-time can
be controlled independently.
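Purely as a numeric illustration (the device itself uses the analog bi-core circuit rather than software, and the millisecond values below are assumptions chosen for this example), the relation between the two potentiometer settings and the resulting flicker can be written out as a small Processing calculation:

// Illustrative calculation only: the on-time and off-time set by the two
// bi-core potentiometers determine flicker frequency and image sharpness.
float onTimeMs = 3.0;    // short on-time gives sharp frames, like narrow zoetrope slits
float offTimeMs = 52.0;  // off-time sets the dark interval between frames

void setup() {
  float periodMs = onTimeMs + offTimeMs;    // one full flicker cycle
  float flickerHz = 1000.0 / periodMs;      // here roughly 18 flashes per second
  float dutyCycle = onTimeMs / periodMs;    // fraction of the cycle the LED is lit
  println("flicker: " + flickerHz + " Hz, duty cycle: " + nf(dutyCycle * 100, 0, 1) + " %");
}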
2.3 Driving spinning discs
Fig. 4 The Picture-disc is mounted
on a dc-motor. The motor is fixed
with a vacuum cup on the screen.
To spin the picture-discs, I use standard dc-motors with a gearbox
that directly drive the discs. The motors are mounted on a holder
with a vacuum cup, so that the motors can be quickly positioned
on the overhead projector. The motors carry an acrylic disc that
holds the picture-disc printed on a transparency (see Figure 4.).
For controlling the rotation speed, again I use a bi-core oscillator.
For creating a continuous rotation instead of an oscillation, the
potentiometer for the right spin is set to zero, so that the motor
will only turn to the left. The speed can be controlled by a third
potentiometer that controls the current for driving the motor.
2.4 Spinning discs and flickering lights
Being able to control both the flicker of the LED and the rotation of the spinning disc allows one to play and search for the magical
moment when the animation appears. There is a huge parameter
space in which the illusion of continuous motion is perceived. It
starts at around 12 frames per second and goes up to 40 frames
per second. The number of single images on the picture-disc and
the rotation speed of the disc determine the number of frames per
second. The flicker of the LED then needs to be adjusted accordingly; for example, a picture-disc with 16 images that makes one
rotation per second requires 16 flickers per second.
2.5 Combining sound and vision
When used in the shadow play animation mode, the combination of sound and vision is based on directly listening to oscillatory signals that generate movement of a robotic structure,
or on coupling these signals to analog synthesizers. This technique of combining sound and vision is described in more detail
in the paper Rhythm Apparatus on Overhead (Faubel 2014).
The same set-up is used for driving the spinning picture-discs.
Using an oscillator circuit to drive the rotation of the picture disc
may seem counter-intuitive at first sight. But it allows the continuous rotation of the picture disc to be rhythmicised. To the naked eye
this rhythmic structure is not visible; the motor just seems to
turn. But when the rotation is combined with flicker from the LED,
not only does the cinematographic animation become visible, but also
rhythmic discontinuities in the movement of the picture disc.
3 Software scripts for creating
Zoopraxiscope picture-discs
2 http://processing.org
I developed software-scripts to generate Zoopraxiscope picture-discs, using the software framework Processing.2 These
scripts generate minimalistic animations of simple shapes, such
as expanding hexagons, rotating and expanding rectangles, or
triangles moving in circles. Figure 5. shows two examples of such
picture-discs. The script to draw the rotating rectangles is shown
below.
Fig. 5 Example Picture-Discs
generated with the software. The left
disc is generated by the script below,
when animated it shows rotating
rectangles, the right disc produces
expanding hexagons.
int num_frames = 36;   // number of frames around the disc
int ctr = 0;           // frame counter, advanced on every call to draw()

void setup()
{
  size(600, 600);      // square canvas for the disc (size chosen arbitrarily)
}

void draw()
{
  pushMatrix();
  translate(width/2, width/2);
  rectMode(CENTER);
  // advance the whole pattern by one frame step per call to draw()
  rotate(-2.0*PI/(num_frames)*ctr++);
  for (int j=1; j<15; j++)
    for (int i=0; i<num_frames; i++)
    {
      // one rectangle per frame position i and radial ring j; its size
      // varies with the angular position and the distance from the center
      rect(j*20*sin(i*-2.0*PI/num_frames)
             + 20*sin(i*-2.0*PI/num_frames),
           j*20*cos(i*-2.0*PI/num_frames),
           5+exp(j/5)*4*sin(2*PI/(num_frames*2)*i),
           5+exp(j/3)*2*sin(2*PI/(num_frames*2)*i));
    }
  popMatrix();
}
Code to generate the rotating and expanding rectangles; expansion is a function of the angular position and the distance from
the center.
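To get a generated disc onto a transparency it has to be exported from the sketch and printed. One simple way, added here as an illustration (saveFrame() is standard Processing; the key binding is an assumption for this example), is:

// Press 's' to save the current disc as a PNG, which can then be printed
// onto a transparency and mounted on the ZoOHPraxiscope.
void keyPressed() {
  if (key == 's') {
    saveFrame("picture-disc-####.png");
  }
}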
4 Playing with direct animation and with
cinematographic animation
For playing with direct animation and cinematographic animation, I use basic shapes such as squares and print sequences of, for
example, a rotating square on a picture-disc with a single square at
the center (see Figure 6. for an example). These basic shapes lend
themselves very well to mixing both animation styles. When used in direct
animation the center square rotates back and forth or appears
as a circle at high rotation speeds. In this mode what is special is
that the signals that drive the motors are also used to generate
sound. As a matter of fact, sound and movement are always in sync.
Because of the rotation, the outer patterns smear out and are just
perceived as some texture. When flicker is turned on, the outer shapes
suddenly appear and are perceived as rotating. As the discs are
turning at different speeds, the clear perception of animation does
not appear at the same time. Performing with this setup is really
about playing with perception (see the video linked in Figure 6.).
Fig. 6 A video showing an example
of mixing direct animation with
cinematographic animation
(https://vimeo.com/129420374)
5 Conclusion & Outlook
I have presented a setup that allows the fusion of shadow play
and cinematographic animation. It is based on a simple and cheap
modification of the overhead projector. It is fun to play with these
different modes of animation and to tune in and out of animation
and flicker. Even for the cinematographic animation it is possible
to drive the discs rhythmically. To the naked eye the discs seem to
be rotating continuously; when flicker is turned on, the animation
seems to stop rhythmically. As the signals driving the rhythms
are connected to sound, even for the animation the sound is in
sync with these rhythmical stops of the animation.
As the modification of the overhead projector is really easy and
cheap, I plan to develop a workshop for modifying the overhead
projector. In this workshop participants will learn very basic skills
in electronics and, in a second part, they will start experimenting with animation. While it is handy to use software to
print out picture-discs of animations, it is also a lot of fun to work
on animation directly by, for example, drawing simple shapes by
hand. The key element of the workshop would be based on this
tangibility of animation.
References
Barber, S. Muybridge: The Eye in Motion. Solar Books, 2012.
Canales, J. Desired machines: Cinema and the world in its own image.
Science in context 24, 03 (2011), 329–359.
Dickson, S. A three-dimensional zoetrope of the Calabi-Yau cross-section
in CP4. Leonardo 36, 3 (2003), 230–232.
Faubel, C. Rhythm Apparatus for the overhead projector – a
metaphorical device. In Proceedings of xCoAx 2013, Conference on
Computation, Communication, Aesthetics and X (2013), M. Verdicchio
and M. Carvalhais, Eds.
Faubel, C. Rhythm Apparatus on Overhead, International Conference
on New Interfaces for Musical Expression, Goldsmiths, University of
London; (2014)
Füsslin, G. Optisches Spielzeug: oder wie die Bilder laufen lernten. Verlag
Georg Füsslin, (1993).
Hasslacher, B., and Tilden, M. Living machines. Robotics and
Autonomous Systems 15, 1 (1995), 143–169.
Hendricks, G. Eadweard Muybridge: the father of the motion picture.
Grossman Publishers, (1975).
Long, A. http://www.amandalong.org/kiss-o-scope.
loud objects. http://www.loudobjects.com/.
Muybridge, E. Descriptive Zoopraxography, or the Science of Animal
Locomotion Made Popular. Library of Alexandria, (1893).
Oenema, K. http://www.klaskeoenema.nl/.
Sculpture. http://tapebox.co.uk/.
Smoot, L., Bassett, K., Hart, S., Burman, D., and Romrell, A. An
interactive zoetrope for the animation of solid figurines and
holographic projections. In ACM SIGGRAPH 2010 Emerging
Technologies (2010), ACM, p. 6.
Zielinski, S. Audiovisions: Cinema and television as entr’actes in history.
Amsterdam: Amsterdam University Press (1999).
Short-Papers
Accidental Aesthetics:
Philosophies
of the Artificial
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Nicole Koltick
Drexel University
[email protected]
Keywords: artificial aesthetics, philosophy, speculative realism
This paper will examine a range of philosophies surrounding
aesthetics and begin to speculate on a metaphysical framework
surrounding artificial aesthetics. Tracing earlier arguments from
Hegel and Kant and extracting significant developments in newer
variants of speculative realist philosophies, this paper seeks to
critically engage the realm of aesthetics and computation from
a metaphysical viewpoint. These metaphysics touch on issues of
non-human agency, inter object relations, and aesthetic theory in
relation to computational entities and autonomous systems. The
ability of these systems to operate outside of human cognitive
limitations including thought patterns and constructions which
may preclude alternative aesthetic outcomes, affords them in some
ways limitless potential in relation to aesthetics. Aesthetics here
are not narrowly constrained by a human ability to recognize or
appreciate these outputs. The designation of the accidental or
provisional is utilized as an alternative approach to the production and assessment of aesthetic occurrences.
1 Introduction
An exploration into accidental aesthetics posits that outcomes,
products, thoughts and recognitions of the aesthetic are related
to an unfolding and singular relation or encounter which is not
expected whether in behavior, form, affect or outcome. This is in
line with existing conceptions of human aesthetic breakthroughs
requiring novelty in some regard whether it occurs through
medium, point of view or technique. We can designate that this
is a persistent feature of the realism of the present moment. In
a certain sense every unfolding encounter could be described as
accidental, namely that there exist probabilities in relation to
effects from past causes but not absolute certainty as to the exact
effects. My assertion here of the pervasiveness of the accidental as
an underlying feature of the aesthetic stands in opposition to the
more commonplace view of the term accidental as a throw away
or pejorative designation. Here it is interpreted as a desirable and
affective feature, one that is both ubiquitous and yet under examined philosophically. The accidental alludes to perceptions, interactions, causes and effects not entirely premeditated or conceived,
nevertheless yielding effects both discernable and registered. This
would apply to both human and non-human instances. Imagining
the potential for a drastically diverse range of aesthetic instances
and affective capacities will provide us with an expanded concept
of the potentials for artificial entities in both form and behavior.
2 The Poetics of Conventional Aesthetics
The vantage point of the accidental stands in contradiction to
outdated ideas that the aesthetic resides in a distinctly human
approach which can be seen throughout historical philosophical focuses on aesthetics. The aesthetic as a term and an area of
philosophical inquiry has posed significant challenges due to the
elusive nature of capturing and locating the aesthetic. Hegel in
his Lectures on the Introduction of Aesthetics in the 1820s recognized that, “a study of this kind becomes wearisome on account
of its indefiniteness and emptiness and disagreeable by its concentration on tiny subjective peculiarities” (Hegel, Knox transl.,
1979). This indefiniteness and emptiness can be identified as a
pertinent feature of the aesthetic. When we are dislodged from
our default mode of interpretation and cognition, when the present moment unfolds with unexpected variability, a disruption of
our cognitive expectations occurs and we experience a sort of
indefiniteness. This disruption and its affective capacity can be
predicated in one’s own aesthetic encounters with any number of
phenomena which may then be translated into aesthetic products
or simply remain in a singular aesthetic experience with oneself.
The question then becomes, can artificial systems embody indefiniteness? This question could return to the sensual realm the
artificial embodies. Autonomous systems and artificial entities
have continuously evolving inputs be they informational or physical and they are capable of registering each new composition of
sensory inputs as unique and singular encounters. The structuring and legibility of this registration is highly variable and could
be expressed through generation of an aesthetic activity, output,
artefact or relation. The way these entities register disruptions
when encountering something novel and the outputs they may
enact in response is an area that warrants greater metaphysical
attention in relation to aesthetics.
The one consistent feature in discussions of the aesthetic
involves the presence of an aesthetic void which eludes precise description or location, both cognitively and materially.
The advent of computational processes calls into question this
unnamed process which has been referred to by numerous evocative yet vague adjectives and nouns including cloudy, the essence,
the rift, the remainder, etc. It is clear that aesthetics pose significant challenges in delimiting and describing what exactly they
are. Steven Shaviro discusses Kant’s statement that there is, “no
science of the beautiful” (2009). The aesthetic realm has traditionally been understood to arise out of such mysterious workings
in addition to summoning contemplation or recognition of such
mysteries through an affectual quality. The aesthetic process
and its related affects cannot be located to one key mechanism
whether physiological or material. It eludes specific definition and
resides alongside other mysterious and opaque processes relating
to emergent phenomena in human and nonhuman complex systems. This aesthetic void removes itself from direct contemplation or description and is a persistently fuzzy and elusive entity.
Examining approaches to translation, metaphor and symbols is
often helpful as they also coincide with considering how the realm
of the aesthetic meets the binary.
3 Non-human Aesthetics
In order to move from a traditional approach to aesthetics which
hinges on human subjectivity and issues of taste and discernment, an examination of current approaches to non-human aesthetics provides a potential way forward. There have been several recent works that attempt to reconcile non-human aesthetics.
Recent influential work includes David Rothenberg’s compelling
book, Survival of the Beautiful which locates beauty as a fundamental part of evolutionary processes and discusses non-human
aesthetics in a compelling manner (2011). Recently Tom Sparrow
has put forward a compelling argument that we are at the end of
phenomenology charging that it is, “no longer apparent how phenomenology is to be carried out or how it differs from, say, thick
empirical description or poetic embellishment” (2014). Phenomenology has concerned itself with the sensual realm and has frequent overlap with the aesthetic. Poetic embellishment is often
a symptom of this work. When faced with this gap (rift, chasm,
unknown, the remainder…) poetics and their affective quality
act as an intermediary plane of communication. In their affective
abilities they utilize this not quite here, not quite there, dislocation. Poetics belonging to the aesthetic realm allow us to probe
and hint at the sense we may gather from the “real” but cannot be
described or located in any speciic way. The ability to transport,
disrupt and point attention to a dislocation from established patterns, identities and constructions aligns with my conception of
the accidental as a fundamental feature of all aesthetic phenomena recognizable or not. Therefore, although the phenomenological method in its insistence on the subject/object distinction is
admittedly flawed, the phenomenological realm, that of sensation
still has much to offer in our contemplation of this void. In their
affective communications, poetics and other aesthetic communications may rub up against and glimpse the “real” much more
accurately than metaphysical descriptions.
There is something to be discovered through deploying phenomenological methods to speculate on computational embodiments. This would include thinking about how these entities see,
feel and comprehend the world through a variety of hardware and
software including advanced sensing capabilities at extreme scalar ranges eluding human perception. In addition there is a staggering variety in the way these systems could eventually operate
in terms of both input and output capacities. Ian Bogost’s book,
Alien Phenomenology puts forth a compelling account of how various machines and devices “see” (2012). This sort of phenomenological approach is not meant to be an anthropocentric reading
of how machines will be like “us” but rather a means to speculate
on the variety of ways they will be quite different. Their potential
for a more diverse range of outcomes could present us with new
understandings of what embodiment looks like from radically
diverse points of reference. This in turn hints at new potential
aesthetic outcomes. It is only when we limit our phenomenology
to human embodiment that we close off any potential access or
insight into artificial aesthetics.
4 Speculative Aesthetics
A speculative realist philosophy is well suited to contemplating
aesthetics of the artificial. By operating outside of the traditional
anthropocentric lens, these philosophies are primarily interested
in examining what lies outside of our traditional perceptions and
assumptions. The endless proliferation of objects or things is
a main focus of Tristan Garcia’s Form and Object. He states the
problem at hand:
…there are more and more things. It is increasingly difficult to comprehend them, to be supplementary to them, or to add oneself to oneself
at each moment, in each place, amidst people, physical, natural, and
artefactual objects, parts of objects, images, qualities, bundles of data,
information, words, and ideas – in short, to admit this feeling without
suffering from it. (2014)
As more and more things are connected and networked the
number of instances, objects and thoughts that can arise in relation to these multiply and intensify. Our ability to name, identify and verbalize these becomes tricky. How many phenomena
do we even have words for? The aesthetic develops, accentuates
and manufactures its own set of unique relationships between
its internal elements, its external relations and any phenomena
it invokes or brings into being. These remain in the gap. Hard to
describe and name, yet real in every sense. Timothy Morton in
Realist Magic, describes one particular type of disruption in perception through the experience of jet lag: “… things are strangely
familiar and familiarly strange – uncanny. Then it hits you: this
is the default state of affairs, not the world in which regularly
functioning things seem to subtend their aesthetic effects…The
smooth world is the illusion! The clown-like weirdness of the
uncanny situation you find yourself in…, is the reality” (2013).
The presence of the uncanny is one specific type of aesthetic
encounter which announces itself without any direct intention.
From a speculative realist point of view any so-called designation of realism itself is irrational and uncategorized. Autonomous
systems instead of being modelled after our views, aspirations,
goals or “feelings” could instead operate from a deliberate stance
of irrationality. Novelty is a distinguishing feature of my argument for the accidental. In this sense an artificial system seems
primed to substantially contribute to aesthetic production. Once
we begin to formulate that interactions however slight are a part of
the aesthetic dimension we can begin to imagine new approaches
to aesthetics and affective instances through the production of
novelty through inducing any number of relations or interactions.
Morton devotes substantial attention to examining relations
between objects and he asserts that any means by which we
perceive and access other entities (objects) through sight, touch,
sound, thoughts etc. are all fundamental to reality. There is a particularly compelling argument he makes in regards to aesthetics,
stating, “It might be better to think of a transfer of information – it
might be better to think that causality is an aesthetic process”
(2013). If we take aesthetics to be a fundamental feature of reality
and intimately bound with causality (Morton, 2013) then computational systems are just as capable, if not more so, of accessing the
“real”. The flat ontological designation he assigns to information,
intimates that data has a particularly unique role in that it can
manufacture and enable the proliferation of novel interactions
between any manner of entities both real and imagined. In this
way computational or artiicial approaches may operate around
the aesthetic in less mediated and by extension more accidental ways. So a computation that engages irrationality, that is not
seeking to mimic or please but rather one which is looking for and
is capable of generating novelty in interpretation, representation
and translation may produce far superior aesthetic encounters.
Hegel stated that, “Art has at its command not only the whole
wealth of natural formations in their manifold and variegated
appearance; but in addition the creative imagination has power
to launch out beyond them inexhaustibly in production of its own”
(1979). The computationally creative imagination has the power
to launch inexhaustibly beyond. Most human aesthetic production involves the recognition, selection, filtering and re-presentation of phenomena. Computational entities are also capable of
these tasks and can be thought of as possessing more of an inclination towards the accidental rather than less. The potential for
these systems to surprise us and present us with novel results is
incredibly underappreciated.
The implication that chance or randomness is entwined with
creativity is not a new insight. Hofstadter in Gödel, Escher, Bach,
explained, “it is a common notion that randomness is an indispensable ingredient of creative acts. This may be true, but it does
not have any bearing on the mechanizability – or rather programmability! – of creativity” (1979). But conventional designations
of the aesthetic and by association creativity rely on an observer.
The human is able to recognize, appreciate and locate aesthetic
qualities and outcomes and even program these capabilities artificially. But these activities have still been interpreted in fairly
conventional terms. A new metaphysical approach to aesthetics
seeks to step outside of the rift or gap that eludes description.
Rather than any sort of clear distinction or description, a focus
instead on the pervasiveness of the accidental as a fundamental
feature of reality allows us to begin to reformulate our conceptions of artificial aesthetics and instead look towards the ability
to generate a multiplicity of novel interactions of varying spatiotemporal specificities.
Speculating upon aesthetics is but one approach by which we
may engage future computational ecologies. Their speeds, specificities and interactions could easily be unrecognizable to us.
Their rapidly proliferating complexity produces an opacity in
relation to exact processes or methods of generating information
and relations. The accidental or provisional should not preclude
us from recognizing the vast potential these systems have for
generating novel relations. The expectation of complete comprehension is not in place for the variety of other numerous entities
we interact with daily, or even ourselves for that matter. Opacity is a persistent feature of our experiences. We might begin by
acknowledging that our current approaches to aesthetics whether
through metaphysical analysis or creative practice may be highly
limiting. Computational systems, with their ever expanding abilities, relationships and entanglements may offer untold potentials
to affect and be affected in unrecognizable, accidental and yet
highly aesthetic ways. By reframing the ways in which we designate, produce and assess the aesthetic we can begin to engage the
synthetic, the accidental and the computational in wholly novel
ways both philosophically and creatively.
References
Bogost, Ian. Alien Phenomenology, Or, What It’s like to Be a Thing.
Minneapolis: University of Minnesota Press, 2012.
Garcia, Tristan, and Mark Allan Ohm. Form and Object: A Treatise on
Things. Edinburgh: Edinburgh University Press, 2014. 1.
Hegel, Georg Wilhelm Friedrich, and T. M. Knox. Hegel’s Introduction
to Aesthetics: Being the Introduction to the Berlin Aesthetics Lectures of the
1820s. Oxford: Clarendon Press, 1979. 5, 25, 33.
Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. New
York: Basic Books, 1979, 863.
Morton, Timothy. Realist Magic: Objects, Ontology, Causality. Ann Arbor,
Mich.: Open Humanities Press, 2013. 65. 71.
Rothenberg, David. Survival of the Beautiful: Art, Science, and Evolution.
New York: Bloomsbury Press, 2011.
Shaviro, Steven. Without Criteria Kant, Whitehead, Deleuze, and
Aesthetics. Cambridge, Mass.: MIT Press, 2009. 1.
Sparrow, Tom. The End of Phenomenology: Metaphysics and the New
Realism. Edinburgh: Edinburgh University Press, 2014, 8.
xCoAx 2015
Computation
Communication
Aesthetics
and X
Training Performing
Artists in the Digital Age:
The Performance and
Interactive Media Arts
Program as a Model
Helen E. Richardson
Glasgow
Scotland
2015.xCoAx.org
Brooklyn College, Brooklyn, NY, USA
[email protected]
Keywords: PIMA, performance, interactive media arts, MFA
training, arts collaboration, digital media, interdisciplinary
arts, art and social engagement, Brooklyn College, Guy Debord,
dérive.
The paper examines the changing culture of the arts in the digital
age as the parameters of the artist expand, demanding diverse
skills, flexibility, and an entrepreneurial outlook, focusing on how
that impacts the training of the artist in an increasingly interdisciplinary, collaborative, technological, socially engaged environment, including a model for training the interdisciplinary performance artist employing digital media and engaging in community
collaborations, the Performance and Interactive Media Arts Program at Brooklyn College. http://wp.pima-mfa.info
The parameters for the artist in the digital age are expanding rapidly as media becomes an important part of the artist’s canvas
and an intermediary in the artist’s communication with the audience. This opening of parameters is not only due to the presence
of computer generated art, which allows for a myriad of interactive environmental experiences, but also due to the reconfiguration of what performance and art signify in our time. Art has gone
through many articulations throughout history from being part
of the social culture in the form of ritual or religious expression to
the cult of the superstar artist provided with extensive access to
the public via private, commercial, or state sponsored patronage.
With the increasing accumulation of individual wealth in society and the establishment of private patronage and investment in
the arts, the artist’s appeal has been tied to a network of patrons
and the potential of their art to accrue value in the marketplace because of its unique qualities that are valued by the arts
establishment.
Students of the arts have sought instruction at schools or with
established artists that they hope will serve as an entreé to the
art establishment, with the hope that this training will develop
their unique talents and lead to the recognition of their particular genius. Museums, theaters and concert halls have become
star makers offering certain artists ongoing public exposure. Artists create art specifically for galleries, large theatre spaces, and
the concert hall, in order to become enshrined in the private and
public sphere as reflections of contemporary tastes and values,
whether as purveyors of the status quo or manifestations of radical chic.
Film, still and moving, has offered the possibility of greater
exposure to a larger audience, allowing art to become ubiquitous
through infinite reproduction, making it available to as many
people as the market will bear, accelerating the migration of
the arts into the populist sphere, where art has become entertainment, evoking the pleasure associated with both beauty and
shock value. Filmmaking and theatrical enterprises put emphasis on the collaborative process between diverse artists, which
encourages cooperation and specialization. In the sixties, with
the introduction of portable video cameras and Super 8, filmmaking became accessible to the general public. As well, mixed-
media experimentations and group happenings of Dada and other
modernist movements in the early 20th century, along with the
ideas of immersive theatre proposed by Antonin Artaud, began
to change the focus of art from the notion of the singular genius
artist, prevalent between the Renaissance and Romantic eras, to
the artist who bypassed the patron in order to experiment and
interact directly with the public, finding their genius and support
within group endeavors.
For a time both the genius artist and the collective arts groups
have cohabited, but as technique becomes more accessible to the
masses and innovation can happen in any corner of the world,
individual expertise has become reliant on group input and the
internet, a speedy conduit for information sharing, has facilitated
networking. Photography, which relies on technology and the
ability to edit and construct, rather than skills built up over years
of drawing and work with the brush or chisel, has usurped the
classical emphasis on portraiture and landscape, freeing the artist to mix techniques and themes with the emphasis now on conceptual originality. The same is true for theatre makers, or storytellers, who, faced with the impossibility of creating yet another
original story, or competing with the impact of documentary
films, are now focused on telling stories in a unique way. This new
original approach is often facilitated by group endeavors and the
television and film world relies on a stable of writers, with diverse
abilities, to turn out a script. Art students now explore a plethora
of techniques through the various offerings possible in university
settings, while the critical response, both in the classroom and
in the art world, focuses on the ideas behind the work, with technique becoming more and more taken for granted as a facilitator
of concept.
With the introduction of digital techniques and the interactivity of the internet and the growing availability of instruments of
creation to the masses, the emphasis on genius as a property of
the few has changed into the idea that everyone has the possibility of expressing their own particular genius. As well the dramatic
change from a male to a female dominated educational culture,
beginning particularly in the earlier grades, has changed the value
system of youth from a competitive individualistic approach to a
culture increasingly geared towards collaboration, sharing, and
enabling in which the classroom becomes less and less stratified
while the teacher becomes a facilitator rather than an authority
figure. This has had a tremendous effect on the interests and
preferences of younger artists who in general no longer carry
the image of the romantic isolated artist but rather the socially
engaged art maker working in collaboration with their community and facilitating the creative impulses of others.
The digital age and the internet have created a forum that promotes open access, which puts it in a dialectic with established
institutions promoting art that is juried by “experts” and insiders.
Though museums and performance venues have expanded their
public spaces and in some cases have become cultural shopping
malls – one can walk through the MoMA and see a performance or
two as one goes up the staircase to view a myriad of galleries and
special entry exhibits, as well as checking out three or four different bookstores, selling all sorts of luxury items, and eat at several
different areas within the museums – these museums can only
serve a fraction of the working artists seeking public exposure. A
redefinition of art venues and communities and an expansion of
arts interactions with the audience has provided the artist with
the possibility of developing a career outside the establishment.
And the internet, with its vast access to the public, has greatly
facilitated this. Young artists augment their access to audiences
through inclusionary tactics from social media to collaboration with other arts groups. As well, young audiences often find
non-traditional venues friendlier, less elitist, and more socially
enabling, which are also more cost effective for the artists.
France, where there is a tradition of the week-end artist, and
the informed amateur, has been instrumental in expanding the
notion of the artist, the audience, and the venue. The poet Baudelaire articulated the idea of the flâneur, the artist-spectator of the
modern urban landscape, who leaves the isolation of the studio
and “enters into the crowd as if it were an immense reservoir of
electrical energy” (Baudelaire, 9) strolling through the city with
an aesthetic pleasure, passionately engaged, yet incognito. The
flâneur is both reader and artist, as his/her observations determine the art. This notion evolved with the Dadaists and Surrealists, who took a more active approach, using chance to establish a
relationship with the street by following certain arbitrary signposts. Marcel Duchamp turned the flâneur into an establishment
artist by bringing found objects of the urban landscape into the
museum. The Situationists took the Flâneur a step further by suggesting a conscientious analysis of urban geography and its distinct attractions (Debord). The artist no longer needed to access
a traditional venue, they could now use the urban landscape as
their studio and stage.
This opening up of artistic practices, pulling the artist out of
his private preoccupations into social engagement, collaborative
endeavors, and expansion of techniques, has taken over a century.
The ubiquitous urbanization of the landscape, the easy access to a
virtual global crowd, 24-7, and the plethora of accessible creative
tools has changed the artistic landscape irrevocably. The question becomes how to prepare the emerging artist for this new territory. Most models of arts education still focus on the creation of
the individual genius, providing an education that banks on the
artist succeeding through a unique aesthetic achievement that
sets them above the rest, though the odds are remote for most
artists, even those of exceptional talent. The marketplace is volatile and the public interest quick to change. In a world full of
entrepreneurial artists, those who focus on their own particular
genius, in isolation, will generally stay there. Today the working
artist is in general flexible, crosses styles and disciplines, spends
as much time networking as creating, and often has an easier
time getting funding for collaborative, interdisciplinary work,
integrating technology, and including social engagement. Using
social media and building an image through industry standard
promotion is also important. Most important is the ability to
conceptualize how these diverse potentialities can come together
to articulate a single vision. However there are few arts education
programs that offer a training that encourages all these qualities.
Ten years ago several Brooklyn College faculty began to imagine a graduate program, Performance and Interactive Media Arts
(PIMA http://wp.pima-mfa.info), based around interdisciplinary
practices in which collaboration, performance, and interactive
media would be a core part of the training. It brings together
theater makers, musicians, dancers, sound artists, visual artists, software programmers, poets, etc.. All projects in PIMA are
created collaboratively. In the first semester, students are given
general assignments, with a loose set of objectives to provide a
focus towards creating weekly collaboratively generated performance projects, approximately ten minutes in length, as well as
longer end of semester projects. Collaborative group members
rotate weekly in order for class members to work with everyone
in different combinations. Feedback is ideally conducted under
a framework developed by the dancer Liz Lerman in the 1990s, in
which the point of the feedback is to support the goals of the
group and avoid critical responses that impose an external vision.
In the second semester projects become semester long endeavors
and are usually performed off campus.
PIMA students are introduced to Max/MSP software in their
first semester, integrating the technology into their weekly creative work. During the course of study, the training seeks to introduce various software and digital tools such as Arduino, Adobe
Creative Suite, Pro Tools, projection mapping, Isadora, Processing,
Ableton, and Audacity, with the goal of giving students a large
digital palette from which to work. As well, everyone is required
to do physical training towards attaining performance skills, with
an emphasis on Viewpoints, a method that fosters collaborative
dynamics and an awareness of the demands of composition and
dramaturgy as part of the creative process. PIMA students are
expected to be ready to be of service to each other in setting up
projects for viewing and take-down. A spirit of colleagueship and
mutual support is necessary for the successful realization of the
program. A core course, often pivotal in terms of a PIMA student’s evolution within the program, is the PIMA course on social
engagement in the second semester, in which students work with
communities outside the college with the goal of creating an
event that reflects an authentic collaboration between community and artist. PIMA training includes a knowledge of the contemporary history of performance and theories associated with
a deeper understanding of 21st century performance techniques,
as well as effective practices in creating collaborative community
actions, and theory often informs concept and process. There are
also courses in pedagogy, as well as self-producing.
The year long thesis in the second year must be done off campus
and the students take responsibility for all aspects from creation
to finding a venue, fundraising, enlisting outside collaborators as
necessary, and publicity. The collaborative approach to creating
performance breaks with many of the expectations set up by the
professional theater, which include defined roles, specialization,
and individual credit for work done. Having no designated director
to navigate the creative process, each member of a PIMA cohort
is expected to take on the responsibilities of that role, sharing in
the leadership of a project. Hierarchy is avoided and flexibility is
encouraged. Everyone takes on the responsibilities of conceptualization and realization, including the roles of producer, designer,
and performer. One of the challenges of a collaborative process is
the continual communication demanded of its participants and
technology is enlisted in facilitating discussion, with online conferences and idea sharing.
In this structure, individual ownership of ideas is harder
to establish as discussions are not about who did what but
more focused on the how and why of the project content. This
free-wheeling creative process translates into events where participants and audience interact directly, as well as through technology, including using cell phones in various creative ways or
triggering interactive sound and video installations. Immersive
actions or Situationist dérive encounters with the urban landscape are also incorporated providing controlled and spontaneous
audience participation.
As to professionalization and the job market, there is no specific career expectation in the program, knowing that the artist of
the future will have to be flexible as the expectations and interests of a new generation of spectators change at an increasingly
rapid rate: social, aesthetic, conceptual, producing, pedagogical,
and technological skills provide the graduates with the ability to enter
a variety of professional activities in the arts from performance
making and digital design to producing, curating, scholarship,
social engagement, and teaching.
References
Baudelaire, Charles, The Painter of Modern Life and Other Essays, trans.
Jonathan Mayne (London: Phaidon, 1995)
Debord, Guy, “Theory of the Dérive,” in Les Lèvres Nues #9 (November
1956) reprinted in Internationale Situationniste #2 (December 1958)
http://www.cddc.vt.edu/sionline/si/theory.html
Fluid Control:
Media Evolution in Water
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Christoph Theiler
wechselstrom (artist group) Vienna, Austria
[email protected]
Renate Pittrof
wechselstrom (artist group) Vienna, Austria
[email protected]
Keywords: controller, computer interface, water, electronic
music, video, mass inertia, fluid, potentiometer, switch, fader
We have developed water based electronic elements which we
built into electric circuits to control different parameters of electronic sound and video tools. As a result of our research we have
constructed a complex controller whose main component is water.
This tool makes it possible to control analog and software synthesizers as well as video software and other electronic devices, especially microcontroller-based platforms like Arduino or Raspberry Pi.
Introduction
Many traditional music instruments such as violins, guitars, timpani, pianos, and trumpets can give the musician an immediate tactile response to their playing. A strike on the timpani makes
the mallets bounce back in a very specific manner, depending on
the velocity, intensity, point, and angle of the beat. Plucking a
guitar string, bowing a violin, sounding a trumpet or pushing a
key on the piano not only requires overcoming a resistance but it
also produces a kickback. On a piano for example, this kickback
consists of the hammer falling back, an effect which the musician, upon touching the keys, can feel directly in his fingers. The
nature and strength of this kickback response depend on both
the type of action (plucking, beating, blowing, striking) and
the strength, the sound quality, and the pitch.
In electronic music the tactile feeling of the generated sound
is absent. We cannot reach into the electric current and influence
the sound quality with our hands in a direct manner. We cannot
feel the swinging of an oscillating electric circuit consisting of
transistors, resistors, and capacitors. Musicians always have to play electronic instruments in an indirect manner, via interfaces.
These days the development of many industrially produced
interfaces tends to avoid mechanical components as much as possible or to use only a minimum of mechanical parts. This leads
to the fact that the input devices themselves do not create any
musically adequate resistance to the musician's actions. Moving
a fader or potentiometer from point zero up to half (50%) requires
the same force as moving it from half to the top (100%). If this
tool is used to influence the volume or the amount of distortion
of a sound, one would wish for a fader whose sliding resistance
increases according to the distance. Certain attempts have been
made at finding a solution, but the results have not yet gone
beyond the status of a dummy, i.e. they are not actually included
in the working circuit of the sound production.
The best-known example of such a development is the
weighted keys of a keyboard. They are supposed to imitate the
feel of a traditional piano but are not actually linked to the sound
production. However, these particularities of the electronic sound
generation do not imply a lack because the listener is rewarded
with an immense amount of sound possibilities, a wealth that
hardly exists in music produced with traditional instruments. On
the other hand we have to admit that these particularities clearly
influence the aesthetic perception of the work. Especially in the
beginning of electronic music people used to describe the sound
as very mechanical.
Fluid Control
Fig. 1
The artist group “wechselstrom” has made an attempt to develop
the potential: a first approach consisted of producing the movement of sounds in space with an interface that gives the musician
a physically tangible reference to his actions. These movements
are normally regulated with a pan knob or a joystick. We equipped
the interior of a closable plastic box with metal wires that took
over the function of inputs and outputs of a mixer. These wires
were isolated from each other, i.e. they hung free-floating inside
the plastic box (Fig. 1).
The moment the box was filled with (tap) water, a complex
structure of potentiometers was created, mutually influencing
each other. The wires took over the function of electrodes and the
water served as a variable resistor. Measurements showed that the
electrical resistance between two electrodes was between 15 and 50
kΩ, depending on the immersion depth and the degree of wetting. These values are also used in normal potentiometers in electric circuits.
We have called this new instrument the “Fluid Control” box. It
has been our goal to use Fluid Control as a matrix mixer which
combines the functions of controllers, switches, faders, panning regulators, and joysticks in one device. The movement of the
water inside the box, the sloshing of the liquid reveals not just an
audible image of the movement of sounds in space. Furthermore,
the player / musician can bring his own body into a tactile relationship with the shifting weight of the water. The body and the
instrument can now get into a resonant interaction. This process
is similar to the rhythms of a sand- or rice-filled egg shaker, which
sounds most lively when one succeeds in synchronizing the movement of the grains with the swinging movements of the hand and
arm.
In summer 2012 (during the festival Sound Barrier) we set up
two Fluid Control boxes, two CD players, which resulted in a total
of four mono tracks, and a 4-channel sound system. The four
mono tracks coming from two CD players were launched into the
input side of the first Fluid Control box and mixed together, with the
appropriate proportion of water and sound levels, onto two tracks.
This mixture was fed into the second Fluid Control box and distributed dynamically to the four channels of the sound system
(Fig. 2).
Following the golden rule “current is current is current” the
next step was to modulate not only audio signals but also to modulate control voltages generated in analog synthesizers. These
electronic devices have the advantage of providing multiple
physical inputs and outputs that can be plugged in directly. We
Fig. 2
showed this second setting for the first time on September 15th, 2012, in
the Jazzschmiede in Düsseldorf. We used the possibilities offered
by Fluid Control for influencing the control current that was produced by an analog sequencer in order to drive an analog synthesizer (Fig. 3).
Fig. 3
Fig. 4
As a result of our research we have created a tool which makes
it possible to control electronic sounds within the dispositive of
preselected sequencer and synthesizer setups in a very fast, dizzying,
sophisticated, and sometimes chaotic way. Developing this tool
we intended to make the change of the sound parameters in electronic music physically tangible. We also wanted to give the player
a resistor / a weight into his hand which enables him to react in
a more immediate and body conscious way to changes in sound
beyond the scope of what controllers and interfaces like buttons,
faders, rotary potentiometers, and touch screens can do.
As a third step, we wanted to bring Fluid Control into the sphere
of the digital world of computers, software synthesizers and, as
a follow-up, of video or any other multimedia software. All well-known software synthesizers like MAX, pd, Reaktor, etc., and most
video/graphic software (MAX/jitter, Resolume) use and understand the MIDI specification to control various parameters. We used
a MIDI box which provided MIDI inputs and outputs and was connected to the computer via USB or FireWire. For the creation of a reliable MIDI data stream we took
the +5 volt CV (Control Voltage) specification as an equivalent for
the MIDI data values 0…127. We generated the corresponding data
stream via a CV-to-MIDI converter. We modified the control voltage, which is often constructed with a single potentiometer, by
adding the Fluid Control box and by building it pre- and/or post-fader or as a side channel into the electric circuit
(Fig. 4).
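As a minimal illustration of this scaling step — not the converter's actual firmware; the 10-bit ADC resolution and the function name are assumptions — a C++ fragment mapping a sampled 0–5 V control voltage onto the MIDI value range might look as follows:

#include <algorithm>
#include <cstdint>

// Hypothetical helper: adc is a raw 10-bit reading (0..1023) of a 0..+5 V
// control voltage; the result is the corresponding MIDI data value 0..127.
inline std::uint8_t cvToMidi(int adc)
{
    int value = (adc * 127) / 1023;                        // 0..1023 -> 0..127
    return static_cast<std::uint8_t>(std::clamp(value, 0, 127));
}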
Fig. 5
Fig. 6
Fig. 7
“In1” and “In4” (socket symbol with arrow) are sockets with
switching contacts, all other sockets are without switch. R1 is a
resistor preventing a short circuit when sockets are connected in
the wrong way (e.g. if you connect In1 to In6). The output goes to the
input of one of the 16 channels provided by the CV-to-MIDI converter, which means that this circuit diagram was built 16 times
(Fig. 5).
Connections can be made between all sockets, even between
sockets of different channels. However, only the following connections produce an effect: In1-In2, In1-In5, In2-In5, In3-In4,
In3-In5, In4-In5 and In5-In6.
Figs. 6, 7, and 8 show the basic connections. In Fig. 6 two Fluid
Control boxes are looped in. Together with R2 they build a voltage
divider. When the slider of R2 is in the upper position, the first
Fluid Control box has more influence than box no. 2, and vice versa.
When, for instance, the second box is unplugged, the remaining
box achieves the highest effect with the slider of R2 being in the
upper position. When the slider is in the lower position, the box is
inactive because the slider is connected to ground, therefore the
output voltage is zero. In Fig. 7 and Fig. 8 the box achieves its highest efficiency when the slider is in the center position.
Obviously, Fluid Control can be connected to any microcontroller or computer. In this case a MIDI translation is not necessary;
the circuits shown in Fig. 4 – Fig. 8 can be directly plugged into the
analog inputs of an Arduino or Raspberry Pi.
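As a hedged sketch of such a direct connection — the pin, controller number, and update rate are assumptions, not details from the installation — an Arduino program (C++) could read the Fluid Control voltage on an analog input and emit MIDI control-change messages over its serial port:

// Assumed wiring: the Fluid Control output (e.g. via the divider of Fig. 6)
// feeds analog pin A0; a MIDI interface listens on the serial port.
const int FLUID_PIN = A0;       // hypothetical pin choice
const byte MIDI_CC  = 7;        // hypothetical controller number (volume)

void setup() {
  Serial.begin(31250);          // standard MIDI baud rate
}

void loop() {
  int raw = analogRead(FLUID_PIN);            // 0..1023 for 0..+5 V
  byte value = map(raw, 0, 1023, 0, 127);     // scale to the MIDI range
  Serial.write(0xB0);                         // control change, channel 1
  Serial.write(MIDI_CC);
  Serial.write(value);
  delay(10);                                  // roughly 100 messages per second
}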
Film clips illustrating the operation of this instrument are
available under the following Internet links:
How it works: (search for “Fluid Control Essenz”) https://www.
youtube.com/watch?v=ed4JlMMNnyg and “Fluid Control – The
Installation” https://www.youtube.com/watch?v=41uZi7bEdeI
wechselstrom
Christoph Theiler & Renate Pittrof
Fig. 8
“wechselstrom” is a label owned by Renate Pittroff and Christoph
Theiler. Based in Vienna, “wechselstrom” runs a so-called “offspace”, which offers room for exhibitions, media activism and all
art forms on the fringe of culture.
Selected works:
Piefkedenkmal – the construction of a monument for the musician Gottfried Piefke, who is also the namesake of the well-known
Austrian derogatory name for Germans (2009 Gänserndorf)
Samenschleuder – a tool for environmentally conscious car
driving (2009 Weinviertel, Lower Austria)
bm:dna – the government department for dna-analysis (2005
Vienna)
Tracker Dog – follow a (your) dog and track the route with a GPS,
then print and distribute new walking maps (2008 Mostviertel,
Lower Austria)
Community Game – a tool for distributing government grants
using a mixed system of democratic vote and randomized control
(2006 Vienna – distributing 125.000 Euro)
whispering bones – a theatre play asking for the whereabouts of
A. Hitler's bones (2004 Vienna, rta-wind-channel)
Reply – mailing action: resending Mozart's begging letters
under our own name to 270 people: to the 100 richest Germans
and Austrians, to managers and artists of the classical music
business, and all members of the Austrian government (2005/06
Vienna)
Re-Entry: Life in the Petri Dish-Opera for Oldenburg 2010
www.wechsel-strom.net, www.piefkedenkmal.at
www.samenschleuder.net, www.trackerdog.at
Designing with Biological
Generative Systems:
Choice by Emotion
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Raul Pinto
Department of Industrial Design, Izmir University of Economics,
Izmir, Turkey
[email protected]
Paul Atkinson
Sheffield Hallam University, Sheffield, UK.
[email protected]
Joaquim Vieira
Department of Material Engineering and Ceramics, University of
Aveiro, Aveiro, Portugal
[email protected]
Miguel Carvalhais
ID+, Faculty of Fine Arts, University of Porto, Porto, Portugal.
[email protected]
Keywords: biological design, generative, customization,
emotion
Consumers as co-producers or co-designers are frequently presented as the solution for mass-customization, but the success
of these systems as enhancing emotional bonds between user
and object seems to be questionable. Making choices may not be
enough to generate a bigger connection between people and their
things. Artifacts produced using biological systems with generative potential, where nature’s randomness and physiological processes have an important role in the definition of form, may have
the capacity to foster the emotional connections that are missing, arising from nurturing and from an understanding of their
morphogenesis, from the proximity and time required for their
growth and development.
1 Introduction
More than thirty years ago Alvin Toffler in The Third Wave (1980)
projected that the consumer would be integrated into the production process and that goods and services would be self-customized to a point where consumption and production would be
intertwined as one. He called this producer-consumer a prosumer.
It seems like Toffler wasn’t completely wrong, as we see many
companies shaping their business plans to integrate users into
their design and production processes (Piller, 2004), but he also
wasn’t completely right.
In The Paradox of Choice, Barry Schwartz points out that the
lack of success of these systems based on co-production or co-design resides mainly in the fact that consumers don’t know or don’t
want to make choices: “As the number of choices grows further,
the negatives escalate until we become overloaded. At this point,
choice no longer liberates, but debilitates. It might even be said to
tyrannize.” (2005). This is where mass-customization may lead to
“mass confusion” (Teresko, 1994) due to great uncertainty and the
burden of choice. (Piller, 2004)
Digital generative systems may be part of the solution: their
capacity to produce new designs automatically, modifying one
form into another with algorithms guaranteeing a unique outcome
each time; this means that with one single choice – when to interrupt the process – the consumer obtains a one-of-a-kind product.
Although we can see great potential in digital fabrication
(mainly additive manufacturing) for the production of complex,
unique and innovative artifacts, as the technology presents itself
today, it has many limitations when compared to production with
standard manufacturing methods, not guaranteeing the quality
one can expect in a consumer good (Grimm, 2012).
In biological systems with generative potential, where nature’s
randomness and physiological processes have an important role
in the definition of form, we understand that artifacts have the
capacity to foster emotional connections that arise from their
nurturing and from an understanding of their morphogenesis,
from the proximity and time required for their growth and development. Choice in this scenario may not be a burden but rather a
pleasurable action like feeding a pet or watering a plant.
These systems seek to develop artifacts in a sprouting stage as
well as the constraints for their growth. Artifacts resulting from
these processes are the result of a close relationship between the
various constituent elements, as the system will only result in
a final product if it is understood and nourished. The end result
is singular and unique, with aesthetic qualities that arise from
the understanding of the artifact and the connection created with
it. In this context, these artifacts are individualized, more than
customized.
We are developing a series of DIY matrices for the production of
artifacts made with mycelia (the vegetative part of a fungus, consisting of a network of fine white filaments) in an embryonic stage,
to be distributed to users that will be asked to nurture them into
final objects; in this process each user will nurture their artifact
into a final object, where all decisions will be their responsibility, from sunlight exposure to the interruption of growth. To better
understand how individuals respond to this type of object and to
choice making, each user will be requested to register the daily
evolution of their artifact and to describe their feelings towards it.
2 Context
In The Meaning of Things, Domestic Symbols and the Self, Mihaly
Csíkszentmihályi and Eugene Rochberg-Halton, affirm that to
most people, plants are one of the most cherished possessions in
the household. They defend that this happens due to the “slow,
growth-producing nurturance and life-giving concern”, we can
also add that because a plant is a living thing with an existence of
its own, we tend to look at it differently than we do to inanimate
objects (1981). Bruce Sterling in Shaping Things forecasts a near
future where humans and objects are part of “comprehensive and
interdependent” systems, in a “technosocial” culture (2005).
Biological systems that are generative or have generative potential can produce artifacts that provoke new ways of relating to our
things, questioning the standardization seen in mass production,
as stated by Deyan Sudjic in The Language of Things: “the role of
the designer when working for the industry is more than the one
who conceives the form of things, it is to think out the interaction
between people and the artificial world, and in particular how we
become attached or not to things”(2009).
Projects like Veiled Lady by Studio Eric Klarenbeek and Silk
Pavillion by the MIT Media Lab are examples of how objects can
evolve from an embryonic stage into complex unique artifacts if
they are nurtured and understood, and can reinforce the relationship between users and their things.
Veiled Lady is part of the The Mycelium Project - Print and Grow.
Using a 3D printer with two independent extrusion nozzles, an
inoculated straw-based substrate was deposited inside bioplastic structures printed at the same time in the configuration of
a bench and, after a few weeks it bloomed. The growth process
was interrupted by dehydrating the mycelia resulting in a stable
unique product (Klarenbeek, 2014).
Fig. 1 Veiled Lady by Studio Eric
Klarenbeek © Studio Eric Klarenbeek
2014
In Silk Pavillion, a structure was made out of a silk thread laid
down by a CNC (Computer-Numerically Controlled) machine. A
swarm of 6,500 silkworms was positioned at the bottom rim of the
structure, and autonomously reinforced the gaps across CNC-deposited silk fibers. Following their pupation stage the silkworms
were removed (Oxman et al., 2013).
Fig. 2 Silk Pavillion by MIT Media Lab
© Steven Keating 2013
3 Testing
A small series of DIY casts and step-by-step instructions will be
distributed to allow people to build their own matrix and grow
their own product with the intention of better understanding how
individuals respond to these objects. The casts will consist of an STL
(stereolithography) 3D-printable file and a PDF drawing of the
cutting dimensions for a plastic sheet. After being printed and cut,
these materials are easily assembled and filled with mycelium-inoculated straw. To ease the users’ job we recommend the transfer
of the content of a commercial mushroom kit into the predefined
form. Dimensions will be constrained by the printing volume of
an average low-cost 3D printer, and the initial user group will be
selected among people with some experience with commercial
mushroom growing kits. The choice of this user group guarantees
some familiarity with the nurturing process and can give us an
emotional comparison between a traditional commercial kit with
the only focus on producing edible mushrooms and the possibility
of giving the substrate a second use.
Each user will be asked to nurture their artifact into a final
object, and for this they will have to follow the normal instructions of the familiar commercial kit. All options will be their
responsibility: sunlight exposure, room temperature, when and
how much to water, growth interruption, etc. Each user will be
asked to make a log of their options and a photographic register of
the mycelia’s expansion and mushroom growth, and a questionnaire
will be used to understand their feelings towards it at the various
stages.
Natural forms are continually modified during growth by their surroundings. Theoretically all the leaves of a single tree should be identical, but this could only happen if they were able to grow in surroundings completely devoid of outside influences and variations. All oranges
should have an identical round shape. But in reality one grows in the
shade and another in the sun, another in a narrow space between two
branches, and they all turn out to be different. This diversity is a sign of
life as it is actually lived. The internal structures adapt themselves and
give birth to many diverse forms, all of the same family but different
(Munari, 2008:167).
The system and the initial template will be designed, leaving
most of the growth constraint choices for the user. We believe
that a greater awareness that their actions helped define the final
object, will also generate a greater tie-in between user and object,
a connection by emotion and understanding more than the mere
relationship of possession.
In the presented case, the filling of the cast results in a hollow
conical geometry that can be used as a hanging lampshade;
we understand that proposing an artifact that can have some kind
of utility will help users relate to it more easily and will facilitate
their ability to question its aesthetic qualities by having the possibility to compare the object to a well-known, common product.
The option of designing an artifact with a simple geometry has
the intent that the growth of the mushrooms will have a greater
emphasis in the overall appearance.
We understand that the outcome of these systems may not be
perceived as having the traditional attributes that are connoted
with quality products, one has to be connected to the artifact by
the whole understanding of the process and not only simply by
looking at its surface. As Donald Norman explains:
Attractiveness is a visceral-level phenomenon – the response is entirely
to the surface look of an object. Beauty comes from the reflective level.
Beauty looks below the surface. Beauty comes from conscious reflection and experience. It is influenced by knowledge, learning and culture.
Objects that are unattractive on the surface can give pleasure. Discordant music, for example, can be beautiful. Ugly art can be beautiful
(2004:98).
The problem is that we still let logic make decisions for us, even though
our emotions are telling us otherwise. Business has come to be ruled by
logical, rational decision makers, by business models and accountants,
with no room for emotion. Pity! (2004:21)
By comparing the questionnaires we aim to be able to better understand if and how the emotional connection evolved between user
and object; the daily photographic register may offer a better understanding of how the base geometry evolved into its final form and
what factors motivated the variations.
We intend that before the end of March 2015 the user group will
be defined and briefed to initiate the experimentation with the
proposed templates. If we consider the average growth rhythm of
the mushrooms, the final results should be ready before May 2015,
giving us time to analyse the data before June 2015.
4 Conclusion
In systems that rely on the consumer as a co-producer or co-designer, the way choice making is forced on them can be a problem,
and does not guarantee a greater empathy between a person and
their objects. One of the expected results is to achieve artifacts that are traded in an embryonic stage, that rely on a biological actuator with generative
potential to produce unique, individualized outcomes, and that, at the
same time, are dependent on the user for their evolution and final
conformation.
In the same way we can say that when a plant grows it is also
responding to its grower, and that this creates unique bonds that are
different from those common between people and their inanimate
things. We look forward to the idea that these systems will catalyze
greater empathy between objects and their users although they are
not living artifacts themselves but the result of a living system.
References
Csíkszentmihályi, Mihaly, and Eugene Rochberg-Halton. The Meaning
of Things: Domestic Symbols and the Self. Cambridge: Cambridge
University Press, 1981.
Piller, Frank, et al. From mass customization to collaborative customer
co-design. European Conference on Information Systems (ECIS). Turku:
Finland, 2004.
Grimm, Todd. Additive Manufacturing is a Poor Substitute, TCT. Tattenhall
UK: Duncan Wood, 2012.
Klarenbeek, Eric. Mycelium Project 2.0 - Veiled Lady. Zaandam, 2014.
Munari, Bruno. Design As Art, London: Penguin Books, 2008.
Norman, Donald A. Emotional Design: Why We Love (or Hate) Everyday
Things, New York: Basic Books, 2004.
Oxman, Neri, et al. Silk Pavillion: CNC Deposited Silk & Silkworm
Construction - MIT Media Lab. Mediated Matter [Online]. Available:
http://matter.media.mit.edu/ee.php/environments/details/silk-pavillion
[Accessed 10-06-2013], 2013.
Schwartz, Barry. The Paradox of Choice: Why More Is Less. New York:
Harper Perennial, 2005.
Sterling, Bruce. Shaping Things, Cambridge, Massachusetts: The MIT
Press, 2005.
Sudjic, Deyan. The Language of Things - Design, Luxury, Fashion, Art: how
we are seduced by the objects around us. London: Penguin Books Ltd,
2009.
Teresko, John. Mass customization or mass confusion? Industry Week/IW
243, 45, 1994.
Toffler, Alvin. The Third Wave. New York: Bantam Books, 1980.
Objects with Multiple Sonic
Affordances to Explore
Gestural Interactions
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Olivier Houix
STMS IRCAM-CNRS-UPMC, Paris, France
[email protected]
Frédéric Bevilacqua
STMS IRCAM-CNRS-UPMC, Paris, France
frédé[email protected]
Nicolas Misdariis
STMS IRCAM-CNRS-UPMC, Paris, France
[email protected]
Patrick Susini
STMS IRCAM-CNRS-UPMC, Paris, France
[email protected]
Emmanuel Flety
STMS IRCAM-CNRS-UPMC, Paris, France
[email protected]
Jules Françoise
STMS IRCAM-CNRS-UPMC, Paris, France
jules.franç[email protected]
Julien Groboz
Independent Designer, Paris, France
[email protected]
Keywords: design, sound synthesis, interaction.
We present a family of sonic interactive objects called Stonic.
They are designed to provide users with different affordances,
i.e. action possibilities, associated with specific sound feedback.
These objects are used in experimental studies to explore how
augmented auditory feedback influences the object manipulation.
We selected a set of basic interactions based on studies on the
auditory perception of physical interactions producing sound.
These interactions correspond to different ways of manipulating
the objects, leading to a set of design requirements. Twenty initial objects were made with acrylic resin and/or polystyrene. The
different shapes were tested in order to select a smaller number
of objects affording a wide variety of actions. The selected shapes
were finalized using 3D printing and equipped with several sensors: a force-sensing resistor (FSR), piezos and an inertial measurement unit. Specific software was made to enable real-time
recognition of the different interactions and for the mapping of
each action to specific sound processes.
1 Introduction
The development of interactive devices has challenged the traditional design approaches, by extending the concept of product
usability to the user experience concept, which includes user’s
personal goals, expectations and emotional aspects (Pucillo and
Cascini, 2014).
Our interaction with physical objects is multimodal, engaging not only vision, but also touch, proprioception and sound,
leading to a holistic experience. We are particularly interested in
the sonic aspects of the interaction, which have generally been less
addressed. Namely, our research is part of the emerging field
called Sonic Interaction Design (Franinovic and Serafin, 2013)
that focuses on the design of sonic feedback, taking into account
the user-system relationships from an active and dynamic point
of view. One objective of sonic interaction design is to extend
the use of interactive objects with sounds to achieve a variety of
do-goals and be-goals through different motor-goals (Hassenzahl,
2013). In particular, we are interested here in designing the sonic
interaction produced by the manipulation of objects, by augmenting them through sensors and sound synthesis. This implies modeling the different interactions between the user, the object and
its environment.
Designers and researchers developed numerous examples of
interactive sonic objects, dedicated to experiments (design and/
or artistic). These objects have been also designed to study the
impact of continuous sound feedback on different aspects of the
user experience: from performance to learning, taking both emotional and aesthetic dimensions. However, these objects generally
focused only on one or a few basic gesture interactions. For example, the Ballancer (Rath and Rocchesso, 2005) allows the user to
tilt a wooden fence producing a sound linked to this simple action.
Other researchers (O’Modhrain and Essl, 2004) investigated the
interaction between sound and touch through different prototypes: the Pebble Box (manipulation of stones) and the CrumbleBag (crumbling action). The Spinotron (Lemaitre et al., 2009) is
an object with a pumping action affordance. Clicking sounds generated by a physical impact model simulate the rotation of a virtual gear inside it. While these different objects can be used with
a limited action set, we intend here to develop objects that afford
a variety of gesture interactions and sonic feedback, i.e. offering different motor-goals. With this goal in mind, we designed
manipulable sonic interactive objects, called Stonic, augmented
with sensors driving sound synthesis.
While sharing similar technological aspects, this research can
be distinguished from most of the objects and interfaces developed
in the field of New Interfaces for Musical Expression (NIME). The
aim here is not to produce specifically sonic or musically expressive results, but rather to focus on the object manipulation: we
are interested in studying how the interactive sound design can
inform and influence the object manipulation. Our objects are
thus designed to investigate how the action-sound relationship
(arbitrary, metaphorical, analogical) can influence the manipulation of the object, and how the different types of morphological
sound characteristics can influence the user’s agency, i.e. producing the sense that the user is “in control” of the sound they produce
(Knoblich and Repp, 2009).
The paper is structured as follows. We introduce the theoretical
bases that guided us in the design of the object’s shape and the
possible interactions with this object. We present the hardware
and software development.
2 Physical Interactions
The theoretical bases that have motivated the object design come
from two different sources. The first one draws from studies on
environmental sound perception and the second one corresponds
to the literature on manual gestures.
The physical interactions producing sounds have interested
researchers in order to understand the different mechanisms
associated to sound perception. In the continuity of Gaver’s work
(Gaver, 1993), we have studied the categorization of environmental sounds (Houix et al., 2012). The results indicated a distinction between discrete solid interactions (e.g., impacts, multiple
impacts) and continuous solid interactions (e.g., tearing, shaking, rubbing, ...). These different interactions are the basis of our
requirements for the object design, since we are interested in the
objects manipulations related to the sound production.
In another domain, Napier (Napier, 1956) proposed a taxonomy of manual gestures during object grasping that differentiates between
a gesture requiring power and another requiring precision. This
framework allows us to analyze how people grasp objects and to
relate these actions to sound production.
3 Requirements and Design
Our approach is to define gestures and actions that are relevant
in the study of gesture-sound relationships. For this, we started
with a set of basic gestures associated with the manipulation of
the objects. We then designed appropriate shapes, and equipped
some 3D printing versions of the objects with sensors.
3.1 Basic Gestures and Forms
We started with the lexicon that described different types of
interactions producing sounds. The lexical analysis of sound categories (Houix et al., 2012) has shown a distinction between discrete interactions, like impacts or cyclic movements, and continuous interactions like deformations (to crumple, to crush, to rub,
to roll, ...). Actions like to crease or to crumple were excluded in
this first study, which was restricted to the interaction with solid
objects. We also removed actions such as cutting or sawing,
which would require using tools. We finally selected ten actions:
to hit, to rub, to roll, to turn, to swing, to put, to shake, to press,
and to play with / to crush. These actions are directly related to
the lexicon of solid interaction categories (Houix et al., 2012). Users can
produce these actions directly by manipulating an object with
one or two hands, in contact or not with a surface. These actions
can imply low or high energy. The actions also cover the different
hand manipulations (power & precision grasp, prehensile vs. non
prehensile, motion of the hand or within the hand or no motion)
that have been classified previously (Bullock, 2013). This repertory of actions and conditions of manipulation constituted the
basic requirements for the design of the shape. We started with
twenty initial prototypes made with acrylic resin and/or polystyrene (made at scale 1.0). Each object exhibits a different shape
and different behaviors, for example offering swinging motion
like a Weeble.1 We made a first selection based on the specifications (Figure 1). We tested the different actions produced within
the hand, on the object, and with the object in contact with a surface
during a behavioral experiment in order to test the different
affordances without sonic feedback.
Fig 1 The selected prototyped shapes
and with their associated 3D print
version (when available).
1 https://en.wikipedia.org/wiki/
Weeble
3.2 Hardware and Software
The 3D-printed objects are built with a neutral material (ABS), and
equipped with different sensors. The sensor data are processed
in order to recognize the different interactions, and mapped to
different sound synthesis systems. The electronic part is based on
a Wii module combined with a microcontroller, a follow-up
of previous systems (Rasamimanana et al., 2011). The object
contains an integrated 9-DOF inertial measurement unit (a triple-axis gyro, a triple-axis accelerometer and a triple-axis magnetometer) that allows us to derive absolute angles. These sensors
give the absolute orientation with respect to gravity (up or down), the relative rotation speed and the acceleration (for example, shaking).
The object also contains a force-sensitive resistor (FSR) and two
piezo sensors that are connected to the main board through I2C
using a Teensy 3.0 development board. The piezos allow us to
capture rubbing or tapping and the FSR a gradual pressure. The
sensor data are processed in order to differentiate the different
actions, such as rubbing, tapping, shaking, ..., and to drive sound
synthesis. The mapping strategies, combining both discrete and
continuous strategies, are extensively based on machine learning
methods that allow performing both recognition and mapping.
Specifically, Multimodal Hidden Markov Models (MHMMs) are
used to learn the mapping between movement features and sound
synthesis parameters. The sound synthesis uses recorded sound
material processed with granular synthesis and descriptor-driven
corpus-based concatenative sound synthesis (Schnell et al, 2009),
optionally complemented with physical models. The system is
implemented in the Max6 environment (Cycling’74). We also use
a method called “mapping by demonstration”, by recording examples of actions performed synchronously with sound examples,
and using interactive machine learning techniques. This allows us to
quickly prototype, experiment and adapt sonification strategies
in the design process, and could allow users to craft themselves
the sonic interaction without expert programming knowledge.
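The Stonic firmware itself is not reproduced in the paper; purely as an illustrative sketch (pin assignments, thresholds and the serial protocol are assumptions), a Teensy-style microcontroller loop in C++ could poll the FSR and one piezo and flag coarse press and tap events before the recognition and mapping stage takes over:

// Illustrative sensor-polling loop for a Teensy-class board (C++/Arduino).
// Pin numbers and thresholds are assumptions for this sketch only.
const int FSR_PIN   = A0;   // force-sensing resistor
const int PIEZO_PIN = A1;   // one of the two piezo sensors

const int PRESS_THRESHOLD = 200;   // gradual pressure detected above this
const int TAP_THRESHOLD   = 600;   // short piezo spike interpreted as a tap

void setup() {
  Serial.begin(115200);
}

void loop() {
  int force = analogRead(FSR_PIN);
  int knock = analogRead(PIEZO_PIN);

  // Stream raw values so the host (e.g. Max) can run its own mapping.
  Serial.print("fsr ");   Serial.println(force);
  Serial.print("piezo "); Serial.println(knock);

  // Very coarse local event detection, purely illustrative.
  if (force > PRESS_THRESHOLD) Serial.println("event press");
  if (knock > TAP_THRESHOLD)   Serial.println("event tap");

  delay(5);   // ~200 Hz polling
}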
Summary and Perspectives
We have presented the design of sonic interactive objects that provide affordances for different types of basic gestural interactions. We selected
specific interactions that can be related to sounds produced by
physical interactions. These experimental devices are equipped
with different sensors allowing us to recognize these different
interactions and to map the sensor data to various sound processes. These objects will be used for evaluating the influence of
the sound feedback in object manipulation, as well as the change
in the perception of the object affordances.
Supplemental materials are online:
http://legos.ircam.fr/stonic/.
We acknowledge support from the Legos project (ANR 11 BS02 012)
References
Pucillo, Francesco, and Gaetano Cascini. « A framework for user
experience, needs and affordances ». Design Studies 35, no 2 (2014):
160-79.
Franinović, Karmen, and Stefania Serafin. Sonic Interaction Design.
The MIT Press, 2013.
Hassenzahl, Marc. « User Experience and Experience Design ». In The
Encyclopedia of Human-Computer Interaction, 2nd Ed., (ed) Mads
Soegaard et Rikke Friis Dam. Aarhus, Denmark: The Interaction Design
Foundation, 2013.
Rath, Matthias, and Davide Rocchesso. « Continuous sonic feedback
from a rolling ball ». IEEE Multimedia 12, nº 2 (2005): 60-69.
O’Modhrain, Sile, and Georg Essl. « Perceptual Integration of Audio
and Touch: A Case Study of PebbleBox ». In Sonic Interaction Design,
(ed) Karmen Franinovic et Stefania Serafin, MIT Press, 203-211, 2013.
Lemaitre, G., O. Houix, Y. Visell, K. Franinovic, N. Misdariis, and
P. Susini. « Toward the design and evaluation of continuous sound in
tangible interfaces: the Spinotron ». International Journal of Human-Computer Studies 27 (2009): 976-93.
Knoblich, Günther, and Bruno H. Repp. « Inferring agency from
sound ». Cognition 111, nº 2 (2009): 248-62.
Gaver, W. W. « What in the world do we hear? An ecological approach
to auditory event perception ». Ecological Psychology 5 (1993): 1-29.
doi:10.1207/s15326969eco0501_1.
Houix, Olivier, Guillaume Lemaitre, Nicolas Misdariis, Patrick Susini,
and Isabel Urdapilleta. « A Lexical Analysis of Environmental Sound
Categories ». Journal of Experimental Psychology: Applied 18, nº 1
(2012): 52-80.
Napier, John R. « The prehensile movements of the human hand ».
Journal of bone and Joint surgery 38, nº 4 (1956): 902-13.
Bullock, Ian M., Raymond R. Ma, and Aaron M. Dollar. « A hand-centric classification of human and robot dexterous manipulation ».
Haptics, IEEE Transactions on 6, nº 2 (2013): 129-44.
Rasamimanana, Nicolas, Frederic Bevilacqua, Norbert Schnell,
Fabrice Guedy, Emmanuel Flety, Come Maestracci, Bruno
Zamborlin, Jean-Louis Frechin, and Uros Petrevski. « Modular
musical objects towards embodied control of digital music ». In
Proceedings of the fifth international conference on Tangible,
embedded, and embodied interaction, 9-12. TEI ’11.
Schnell, N., Röbel, A., Schwarz, D., Peeters, G., and Borghesi, R.
“MuBu & Friends: Assembling Tools for Content Based Real-Time
Interactive Audio Processing in Max/MSP”. In Proceedings of the
International Computer Music Conference (ICMC), Montreal, Canada,
2009.
The Maximum Score
in Super Don Quix-ote
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Paul Keir
School of Engineering and Computing, University of the West of
Scotland, UK
[email protected]
Keywords: Videogames, Retrogaming, Laserdisc, Artificial
Intelligence, Computer Vision, Easter Eggs
Arcade laserdisc videogames were pioneered by the original 1983
release of Dragon’s Lair from Advanced Microcomputer Systems.
Alas, the punishing gameplay mechanics of Dragon’s Lair left many
players frustrated. The 1984 laserdisc game, Super Don Quix-ote,
from Japanese developer Universal, continued to employ a traditional animation technique, while including on-screen prompts,
providing the player with a helpful indication of the correct
response to each challenge. By completing Super Don Quix-ote,
without loss of life, a maximum score of 636500 can be achieved
through routine gameplay bonus mechanisms. Super Don Quixote, however, also includes undocumented support for alternative responses to the on-screen prompts. In the project described
here, the open-source Daphne laserdisc emulator; along with
Super Don Quix-ote software including a binary ROM image of
the game itself and associated video files; are provided as input
to a computer vision system, proving a perfect score of 776500 is
possible; then confirmed by the human hand. A video of the full
playthrough is available at https://youtu.be/ZpzWhfh92F4, and
submitted to the Twin Galaxies gaming records organisation.
Fig. 1 Climactic scenes from Super
Don Quix-ote (1984) courtesy of the
Daphne Laserdisc Emulator
1 Introduction
During the golden era of arcade videogames, from the late 1970s
to the late 1980s, the graphical content of games was technically, and arguably artistically, superior to that provided by home
entertainment systems. Barring exceptions such as Battlezone
(1980), Cube Quest (1983), or I, Robot (1983), arcade games were
constructed using 2D sprite-based raster graphics; with noteworthy examples from 1984 including Pac-Land, Marble Madness and
Kung-Fu Master.
The 1983 appearance of the arcade laserdisc title Dragon’s Lair
presented a step-change in the visual quality of arcade games. A
Dragon’s Lair player is presented with what appears to be a traditionally animated cartoon. The catch is that the delightful animation will soon end abruptly in the avatar’s death, unless the
player takes singular action at precisely the moment intended by
the game designer. Such intermittent moments of challenge in
videogames would ultimately become known as quick time events
(QTE); after the relative success of Sega’s Shenmue in 1999. An
alarming aspect of the seminal QTEs in Dragon’s Lair, however,
is that they are not accompanied by an on-screen prompt. Consequently, the player must instinctively, and frequently, respond
to the subtle and fleeting dangers embedded within the game; by
one of four moves from the joystick, or a press of the button.
Super Don Quix-ote (SDQ) was released in 1984, and its Japanese
heritage can be discerned in the anime character design and animation; and to a comparable degree, by its cheesy and bombastic
American dub localisation. The significant gameplay difference in
SDQ, is that the QTEs are accompanied by an on-screen prompt;
one of four rather jarring blue arrow icons, inviting an up, down,
left or right movement of the joystick; or a green button icon,
prompting depression of the sole physical button.
Gameplay is supported by a damsel in distress story theme,
wherein the eponymous hero must rescue his lady love from a
demonic witch. References to the 17th century Spanish novel by
Miguel de Cervantes, The Ingenious Gentleman Don Quixote of La
Mancha, are minimal: a young Quixote retains the squireship
of Sancho (Panza) and his donkey (Dapple); with the ingénue
addressed as Isabella rather than Dulcinea. A notable windmill
appears towards the end of the game, replete with giant.
1.1 Gameplay and Scoring Details
Typically a player who fails to respond to a QTE, or who responds
incorrectly, will see the action cut to a scene involving Quixote’s death; accompanied by a decrement in the lives tally. If lives
remain, gameplay will resume at the start of a level which has yet
to be completed. An ad-hoc score bonus is awarded immediately
after each successful QTE; and a player completing a level with
no loss of life, is awarded the sum of the score bonuses from that
level. Having completed the entire game, a final bonus is awarded
that depends on the number of lives remaining.1 A player completing the game without loss of life can achieve a score of 636500.
This is, however, not the maximum score possible.
Fig. 2 The player is rewarded when
using Button; Left; Button; Left;
Left; Button; Down; Left; Left;
Right; Left; Button; Left & Button
(lexicographical ordering) instead of
the on-screen prompts shown.
A selection of QTEs in SDQ allow responses distinct from those
invited by the on-screen prompts. Figure 2 illustrates all fourteen
such occasions. For example, the game’s penultimate QTE prompt
is shown bottom-right in Figure 2. An arrow invites a leftward
movement of the joystick. Such a gesture is of course permitted;
yet so is a button press. These 14 QTEs are sprinkled throughout
the game, and each valid QTE alternate response provides the
player an additional score bonus of 10,000. This project identifies
all such QTE alternatives using custom software which exhaustively tries all possible responses to the 156 QTEs in SDQ. With
this information, the game is subsequently completed by the
author to obtain the maximum possible score in SDQ of 776500.
2 Software Development
1 An extra life is awarded when a
score of 100,000 is achieved. Play
starts with 3 lives by default.
To discover the full set of alternate QTE responses, an exhaustive
automated search was planned. Two approaches then presented
themselves: either modify the low-level source code of the Daphne
(Ownby 2001) laserdisc emulator; or scan and analyse the display
buffer using computer vision methods. With the source code for
Daphne 32-bit, and the development system 64-bit Ubuntu, the
first option was always on the back foot. The second approach was
then selected; offering the attractive possibility to retarget software components towards other games or applications.
Obtaining a handle to the display buffer of the relevant X Window on Ubuntu is simplified by the libxdo library; readily available in the package manager as libxdo-dev. As libxdo is a C library,
the inclusion of the xdo.h header must be guarded by the extern
“C” linkage specifier. Listing 1 then need only check that a single
matching window was found; list[0] provides that window handle.
Listing 1 Code to locate an X
Window named “DAPHNE:”
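Listing 1 appears in the proceedings only as an image; the following C++ sketch reconstructs its intent as described in the text, using libxdo’s window-search call (variable names and error handling are assumptions):

extern "C" {
#include <xdo.h>                 // libxdo is a C library: guard the header
}
#include <cstdio>

int main()
{
    xdo_t *xdo = xdo_new(nullptr);           // connect to the default display

    xdo_search_t search = {};                // zero-initialise the query
    search.winname    = "DAPHNE:";           // match the emulator's window name
    search.searchmask = SEARCH_NAME;
    search.max_depth  = -1;                  // search the whole window tree

    Window *list = nullptr;
    unsigned int nwindows = 0;
    xdo_search_windows(xdo, &search, &list, &nwindows);

    if (nwindows != 1) {                     // expect exactly one match
        std::fprintf(stderr, "found %u matching windows\n", nwindows);
        return 1;
    }
    Window daphne = list[0];                 // the handle used by later listings
    (void)daphne;
    return 0;
}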
Having the Daphne emulator’s X window, the libX11 library is
used to obtain its width, height, and screen coordinates. A further
library from the package manager, Imlib2, facilitates straightforward interaction with the display buffer. Assuming x, y, w and h
hold the size and location information, Listing 2 demonstrates
code to obtain a pointer, data, to a contiguous array of 32-bit
Alpha-Red-Green-Blue (ARGB) data; DATA32. It is also straightforward to save this as an image file.
Listing 2 Code to obtain fast access
to the X display buffer
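Listing 2 is likewise an image in the proceedings; a plausible reconstruction with Imlib2 — the function name is invented, and grabbing from the root window at the window’s screen coordinates is only one of several possible approaches — is:

#include <X11/Xlib.h>
#include <Imlib2.h>

// dpy is the X display; x, y, w and h hold the Daphne window's screen
// coordinates and size, obtained earlier with libX11.
DATA32 *grab_frame(Display *dpy, int x, int y, int w, int h)
{
    int screen = DefaultScreen(dpy);
    imlib_context_set_display(dpy);
    imlib_context_set_visual(DefaultVisual(dpy, screen));
    imlib_context_set_colormap(DefaultColormap(dpy, screen));
    imlib_context_set_drawable(DefaultRootWindow(dpy));

    // Copy the on-screen rectangle into an Imlib2 image.
    Imlib_Image img = imlib_create_image_from_drawable(0, x, y, w, h, 1);
    imlib_context_set_image(img);

    // Contiguous 32-bit ARGB pixels, one DATA32 per pixel.
    DATA32 *data = imlib_image_get_data_for_reading_only();

    // Saving a snapshot to disk is a one-liner.
    imlib_image_set_format("png");
    imlib_save_image("daphne_frame.png");

    return data;
}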
2.1 Recognising On-Screen Prompts
The QTE on-screen arrow prompts are not subtle; but usefully,
are comprised largely of a single shade of blue. Using the minimal
grabc2 tool, a mouse click will reveal that its hexadecimal ARGB
representation is 0xff00ffd8;3 while the arrow’s red shadow is
0xfffd0100. The four different QTE arrows can be recognised by
traversal of the data array from Listing 2, looking for horizontal runs of blue pixels, followed by a shorter run of red; and vice
versa. The green (0xff00fe00) on-screen button prompt is handled similarly.
2 Grabc is also available from the
Ubuntu package manager.
3 Different display drivers will
generate different colour values.
2.2 Remote Control
With both the X window and a libxdo handle from Listing 1, commands to emulate the press and release of keys may be sent to the
Daphne SDQ window as shown in Listing 3.
Listing 3 Example of code which
emulates the press and release of the
right arrow key
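The listing itself is absent from this extraction; a minimal sketch of what it describes, assuming the libxdo 3 key-sequence functions, is:

  extern "C" {
  #include <xdo.h>
  }

  // Emulate a press and release of the right arrow key in the Daphne window.
  // 'xdo' and 'daphne' are assumed to come from the window search in Listing 1;
  // the 12000 microsecond delay between X events is an illustrative value only.
  void press_right(xdo_t *xdo, Window daphne)
  {
    xdo_send_keysequence_window_down(xdo, daphne, "Right", 12000);
    xdo_send_keysequence_window_up(xdo, daphne, "Right", 12000);
  }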
2.3 Knowing the Score
Alas, the score also requires comprehension. Analysis of the changing score can inform the algorithm as to whether an attempt at an alternate QTE response has been successful or not. An unchanging score can also evidence the loss of a life; and thankfully we can thereby avoid analysis of the life tally digits. The differences between score digits were more subtle than those between arrows, and a training set was obtained by hand; shown in Figure 3. Five horizontal scan lines were positioned to emphasise differences between the 10 digits. As with the on-screen prompts, the simplistic colour scheme of the score digits eased the matching process; though the white pixels of each digit do host minor variations, requiring a fuzziness in the matching of "white"; otherwise typically 0xfffcffd9.
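A fuzzy colour comparison of this kind could look something like the following sketch; the per-channel tolerance is an illustrative value, not one taken from the project:

  #include <cstdlib>

  typedef unsigned int DATA32;   // 32-bit ARGB pixel

  // Return true if every channel of 'pixel' lies within 'tol' of the nominal
  // "white" used by the score digits (typically 0xfffcffd9).
  bool is_score_white(DATA32 pixel, DATA32 white = 0xfffcffd9, int tol = 16)
  {
    for (int shift = 0; shift < 32; shift += 8) {
      int a = (pixel >> shift) & 0xff;
      int b = (white >> shift) & 0xff;
      if (std::abs(a - b) > tol) return false;
    }
    return true;
  }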
Fig. 3 The 10,000s in the screenshots
above provide the digits 0-9 to assist
score recognition
4 Infinite lives merely stops the lives tally (default is 3) falling, and has no further effect on scoring.
2.4 Unique Prompt Identification
The algorithm begins knowing the 156 conventional QTE responses. The DIP switches of SDQ, also supported by Daphne, allow an infinite lives4 option. Infinite lives are useful in reducing the time required to search through all moves; an effect which becomes more pronounced as the game progresses. Nevertheless, the outside possibility of multiple correct alternate responses to a single QTE, and of that being the last QTE of a level, means that the algorithm must be capable of starting a new game following the successful completion of the last one.
Unique identification of each on-screen prompt is complicated by the pseudo-random level order following a death event. Perceptual hashing was ruled out due to the observation that the lifetime of on-screen prompts may bridge a cut in the animation. Ultimately, it was discovered that the screen coordinates of the first on-screen prompt of a level were unique. Knowing the number of QTEs in a level, together with knowledge of the score, was sufficient to track and identify every QTE uniquely.
3 Conclusions and Future Work
5 The C++ source code is available
at https://bitbucket.org/pgk/sdq_
explorer.
A perfect score of 776500 for SDQ was obtained by locating all possible valid alternate QTE responses automatically using the computer vision methods outlined above; informing a subsequent play-through by the author, available on YouTube. Future work could ensure the program5 can accommodate different pixel colours and display drivers; and also alternative screen resolutions, including full-screen. Such affairs are of course somewhat prosaic; the specific goal of the project has been achieved. A project which introduced the QTE icons of SDQ into Dragon's Lair through the Daphne emulator could potentially improve its gameplay, and build synergy between it and SDQ.
References
Matt Ownby. DAPHNE Arcade Laserdisc Emulator. http://www.daphne-emu.
com. 2001.
Exhibition
Skyler and Bliss
Laurent Segretier
www.segretier.com
[email protected]
Monty Adkins
University of Huddersfield
[email protected]
Keywords: Video Music, Glitch Art, Experimental Electronic
Music
Hong Kong remains the backdrop to the science fiction movies of my youth. The city reminds me of my former training in the financial sector. It is a city in which I could have succeeded in finance, but as far as art goes it is a young city, and I am a young artist. A frustration emerges; much like the mould, the artist also had to develop new skills by killing off his former desires and manipulating technology. My new series entitled HONG KONG surface project shows a new direction in my artistic research in which my technique becomes ever simpler, reducing the traces of pixelation until objects appear almost as they were found and photographed. Skyler and Bliss presents tectonic plates based on satellite images of the Arctic. Working in a hot and humid Hong Kong where mushrooms grow ferociously, a city artificially refrigerated by climate control, this series provides a conceptual image of an imaginary topographic map for survival. (Laurent Segretier)
Up Down Left Right
Josh Booth
Jersey City, NJ, USA
[email protected]
Keywords: Generative Art, Aesthetic Function of Contingency,
Permutation Narrative
Fig. 1 Up Down Left Right
(https://www.youtube.com/
watch?v=Gv_o-Cw4h9M)
Up Down Left Right is an audio/video installation that generates a 'permutation narrative' in real time based on the concept of contingency. The character in this narrative is a little yellow cursor that creates bass tones as it moves through an unstable world of shifting-dissolving color blocks. The cursor's behavior is solely governed by its internal ruleset in response to this dynamic environment: white = up, black = down, red = left, blue = right, etc. An algorithm randomly selects permutations that control how the color blocks are animated as well as how the sounds are triggered. There are a finite number of color block landscapes that form around the cursor as it moves. The transformation from one landscape to another completely depends on the cursor's coordinates at the time of change: the x-coordinate determines the time length of the next landscape; the y-coordinate determines which landscape to trigger. Thus, the cursor's behavior influences the environment just as the environment influences the cursor's behavior. Different landscapes offer the cursor different navigational potentialities/limitations that alter and shape the musical patterns and overall form of the piece. Since the underlying permutations are randomly chosen, each time the piece runs the cursor's trajectory will not be the same and the narrative will have a different sequence of events.
This piece is generated by a compositional system that I wrote in Max/MSP/Jitter. The system creates permutations of 16 numbers that control the order in which all sonic and visual events occur. The 16 numbers are partitioned evenly into 4 sets, called 'ensembles'. Each ensemble is generally given either a distinct range of frequencies or exemplifies a certain characteristic in some other musical parameter. The permutations are generated using algebraic group theory, specifically the symmetric group on four elements (the 'symmetric four-group'). This group permutes (1) the order in which the four ensembles occur relative to each other and (2) the four numbers that make up each ensemble (see figures 2-3). These permutations, however, are rarely made explicit on the surface of the work. An exception is the landscape that has only blue squares, whose configurations represent the ensemble patterns in fig. 2. The Jitter (visual) component of the system produces two matrices on the screen: 4 x 16 and 16 x 16. The first matrix manifests in the color block landscapes, the second in the cursor's movements. In the 4 x 16 matrix, the 16 color blocks (often of the same hue) in each of the 4 rows are animated according to the underlying permutations. The cursor is mapped to bass tones in the 16 x 16 matrix that increase in frequency from bottom to top of the screen along the y-axis.
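The 24 orderings of the four ensembles shown in figure 2 are simply the permutations of a four-element set. A small sketch of how such a table can be enumerated (given here in C++ for illustration, rather than in the Max/MSP/Jitter environment the piece actually uses) is:

  #include <algorithm>
  #include <array>
  #include <cstdio>

  int main()
  {
    // Label the four ensembles 0..3; start from the sorted order.
    std::array<int, 4> ensembles = {0, 1, 2, 3};

    // Enumerate all 4! = 24 orderings in lexicographic order.
    do {
      for (int e : ensembles) std::printf("%d ", e);
      std::printf("\n");
    } while (std::next_permutation(ensembles.begin(), ensembles.end()));

    return 0;
  }

The same enumeration, applied to the four event generators within each ensemble, yields the permutations of figure 3.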
Fig. 2 24 Permutations of 4
Ensembles
Fig. 3 24 Permutations of the 4
Ensembles, Each Containing 4 Event
Generators
The Maximum Score
in Super Don Quix-ote
Paul Keir
School of Engineering and Computing, University of the West of
Scotland, UK
[email protected]
Keywords: Videogames, Retrogaming, Laserdisc, Artificial Intelligence, Computer Vision, Easter Eggs
Data Exploration
on Elastic Displays
using Physical Metaphors
Mathias Müller
Chair of Media Design, Technische Universität Dresden, Dresden,
Germany
[email protected]
Thomas Gründer
Chair of Media Design, Technische Universität Dresden, Dresden,
Germany
[email protected]
Rainer Groh
Chair of Media Design, Technische Universität Dresden, Dresden,
Germany
[email protected]
Keywords: elastic displays, information visualization, haptic
interaction
Unknown Meetings
xCoAx 2015
Computation
Communication
Aesthetics
and X
Glasgow
Scotland
2015.xCoAx.org
Matt Roberts
Stetson University
[email protected]
Terri Witek
Stetson University
[email protected]
Keywords: site-specific, augmented reality, transportation
Unknown Meetings is a site-specific augmented reality project for the Glasgow Subway that takes as its premise the awkward and surreal encounters that occur on daily commutes. Subway riders activate via smart phone both an "unknown" object moving over the actual landscape and an accompanying brief poetic audio file which considers such encounters.
These are activated whenever the train approaches a station. Commuters use the free augmented reality app Layar on their smart phones to see a floating image – usually an out-of-place object – and hear a brief accompanying text. Stations are nexuses of anxiety when we commute – is this our stop? By floating objects and words that offer still more unexpected juxtapositions, Roberts and Witek try to shift the anxiety of arrival onto disruptive ephemeral "connections."
Unknown Meetings evades the political feuds, environmental
upheavals, and social displacement with which ordinary public
transportation is so often burdened. It thus offers a critique of
systems of connection as it disrupts a sleepy morning or weary
late afternoon commute with singularly odd encounters.
Colorigins: Algorithmically
Transforming Subtractive
Color Theory Pedagogy
Brad Tober
University of Illinois at Urbana-Champaign, Champaign, Illinois,
United States
[email protected]
Keywords: Colorigins, color theory, color mixing algorithms, gamification, pedagogy, interaction design, interface design, design process, Sifteo Cubes, physical/tangible computing.
Colorigins is a tactile color mixing and matching game designed and developed for the Sifteo Cubes tangible computing platform. By physically manipulating a set of Sifteo Cubes, players attempt to match a target color by mixing a provided set of source colors. Throughout the process of color mixing, players can gain experience with key color theory concepts such as value, saturation, tints, shades, tones, complements, chromatic neutrals, and the relative visual strengths of particular colors. A custom algorithm that references the spectral reflectance values of Colorigins' source colors enables the game's digital approximation of subtractive color mixture.
1 Introduction
Colorigins (a video demonstration is available at https://vimeo.com/97997307) is the first in a series of speculative art and design learning experiences (designed and developed by the Experimental Interface Lab, the author's practice-led research entity) that leverage emerging and novel digital technologies. Some of these experiences take the form of manipulatives, or learning tools, intended to transform conventional approaches to art and design pedagogy.
Colorigins presents a softly gamified approach to learning elements of subtractive/reflective color theory. The game objective is to match a randomly generated target color by mixing it from a set of source (Johannes Itten's conventional primary and secondary) colors. The target color is generated by using Colorigins' color mixing algorithm to create a mixture of multiple source colors; generally, target colors mixed from a greater number of source colors are more difficult to match than target colors mixed from fewer source colors. Throughout the process of color mixing, players can gain experience with concepts such as value, saturation, tints, shades, tones, complements, chromatic neutrals, and the relative visual strengths of particular colors.
2 Platform and Interface
Colorigins is specifically designed and developed for the Sifteo Cubes interactive gaming platform. The Sifteo base stores and runs software built for the platform, connecting wirelessly to up to twelve 1.7-inch square cubes. The cubes each feature a touch-sensitive LCD, an accelerometer, and proximity sensors so that the cubes know when and where they are in contact with one another (Merrill et al. 2012). Colorigins leverages these features to create an engaging and intuitive color mixing experience, fusing the distinctively physical experience of mixing color (like paint on a palette) with the advantages of the digital medium (including Colorigins' custom color mixing algorithm).
3 Color Mixing Algorithm
The physical variables of subtractive color (such as ink/pigment characteristics, surface quality, and ambient/environmental light) mean that the primary components of any colors to be mixed are insufficient for determining the behavior of the mixture (Matsushiro and Ohta 2003). The color mixing model used by Colorigins holds some of these variables constant to approximate a particular type of color mixing experience.
Instead of using primary component values to calculate a color mixture, Colorigins uses spectral reflectance values. As visible light consists of a spectrum of wavelengths, any subtractive color can be represented by a series of data values that indicate the amount of light reflected at any particular wavelength. The source colors for Colorigins are based on a Munsell palette (5R 5/12, 5YR 7/12, 7.5Y 8.5/12, 10GY 6/12, 5PB 5/12, and 5P 4/12) for which a data set of spectral reflectance values has been determined experimentally (Centore 2014). The palette selection was made to prevent rapid desaturation (as saturation invariably decreases upon mixture) while maintaining equal color value, as well as to ensure a perceptually familiar hue gradation.
Given the proportions and the spectral reflectance values of two colors in a desired mixture, a new set of mixed spectral reflectance values can be produced by calculating a geometric mean (Burns 2015). Adjustments must then be made for the ambient/environmental light (called the illuminant) and gamma. A set of primary component values, in RGB for on-screen display, can then be determined following these adjustments.
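As an illustration of the weighted geometric mean step only (the illuminant and gamma adjustments, and the exact data format Colorigins uses, are not shown), a minimal sketch is:

  #include <cassert>
  #include <cmath>
  #include <vector>

  // Mix two spectral reflectance curves sampled at the same wavelengths.
  // p1 and p2 are the proportions of each colour in the mixture (p1 + p2 = 1);
  // the mixed reflectance at each wavelength is the weighted geometric mean
  // r1^p1 * r2^p2 (Burns 2015).
  std::vector<double> mixReflectance(const std::vector<double> &r1,
                                     const std::vector<double> &r2,
                                     double p1, double p2)
  {
    assert(r1.size() == r2.size());
    std::vector<double> mixed(r1.size());
    for (std::size_t i = 0; i < r1.size(); ++i)
      mixed[i] = std::pow(r1[i], p1) * std::pow(r2[i], p2);
    return mixed;
  }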
Acknowledgements
Special thanks to Scott Burns, a Professor Emeritus in Industrial
and Enterprise Systems Engineering at the University of Illinois
at Urbana-Champaign, for his insight into the development of the
color mixing algorithm implemented by Colorigins.
References
Burns, Scott. Subtractive Color Mixture Computation. Scott Burns. Last modified April 27, 2015. http://scottburns.us/subtractive-color-mixture/.
Centore, Paul. Munsell Resources. Paul Centore. Last modified April 26, 2014. http://www.99main.com/~centore/MunsellResources/MunsellResources.html.
Matsushiro, Nobuhito, and Noboru Ohta. “Theoretical analysis
of subtractive color mixture characteristics.” Color Research &
Application 28, no. 3 (2003): 175–181.
Merrill, David, Emily Sun, and Jeevan Kalanithi. “Sifteo cubes.”
In CHI’12 Extended Abstracts on Human Factors in Computing Systems, pp.
1015–1018. ACM, 2012.
AR/VR_Putney 1.0.
Interactive Media
Composition as the
Language and Grammar
for Extended Realities
Alena Mésarošová
Polytechnic University of Valencia, Spain
[email protected]
Ricardo Climent
NOVARS Research Centre, University of Manchester, United
Kingdom
[email protected]
Keywords: Augmented Reality, Virtual Reality, Game-Audio,
VCS3, Putney.
The AR/VR_Putney 1.0 is an interactive installation exploring extensions of our perception of reality. It employs a virtual 1969 VCS3 synthesizer to transcend the boundaries between virtual and augmented realities through sound. In the proposed ludic experience, visitors co-interact in a game-like immersive environment to construct the vocabulary of a shared sonic-centric experience, which triangulates between the real, the virtual and the enhanced. Potentiometers, VU meters, patch-pins and electronic components spring from the AR markers on the walls and wearable t-shirts, so that users can collect them, pick them up and transport them to the virtual VCS3 in the middle of the room. As the synthesizer parts are assembled by visitors, the purposely-composed mosaic of VCS3 original recordings becomes gradually organised. As a result, sound and music composition provide the integration for both experiences as extensions of reality.
1 Context
Technologies to augment the human field of view with virtual overlays (i.e. AR markers for mobile technology) are rarely explored in combination with 100% computer-simulated 3D environments (using head-mounted VR displays) in compositional contexts. In AR/VR Putney 1.0, we provide an additional angle (the aural) to achieve an integrated experience, where sound is enacted from both the virtual and the augmented experience. It aims to enhance and share the visitors' extensions of reality, while providing an added focus on sound in a technological context which was born predominantly visual.
2 Technologies employed
2.1 Visual:
Augmented Reality: Unity3D and Qualcomm® Vuforia™ using AR markers.
Stereoscopic Augmented Reality: Unity3D and Qualcomm® Vuforia™ using AR markers, visualised on a custom head-mounted display built around an Android device and a simulated stereoscopic camera in the Unity engine, with a number of Android devices sending data in real time via OSC (OpenSoundControl) over Wi-Fi.
Virtual Reality: We are currently using an Oculus Rift 2, connected to a computer running the Unity engine, which sends and receives data in real time via OSC. The user can therefore move around the virtual space, detect objects and calculate their boundaries.
2.2 Audio:
Digital Sound Engine: VCS3 real sound samples and transformations in MaxMSP, controlled by the visitor from Unity via OSC. Example by the author here: https://vimeo.com/90427321
Analogue Sound Engine (experimental): An analogue modular synthesizer, which includes a Benjolin module (a Rob Hordijk design), controlled by the visitor from Unity (to Max) and then using an Expert Sleepers digital-to-analogue module (es4encoder~ Max external). See Fig. 4 below.
Quadraphonic system (4 x Genelec 8030s)
2.3 Technical setup:
Fig. 1 Front view of the installation.
Fig. 2 Space organisation map.
Fig. 3 Virtual components pick-up
system.
Fig. 4 Virtual analogue VCS3 (left) and Hordijk's Benjolin controlled via an Expert Sleepers digital-to-analogue module.
‘Let’s Talk Business’:
an Installation to Explore
Scam Narratives
Andreas Zingerle
University of Art and Design, Linz, Austria
[email protected]
Linda Kronman
KairUs Art + Research Lab
[email protected]
Keywords: phone scams, audio installation, interactive
storytelling, reverse engineering, artivism.
16th century ‘face to face’ persuasion scams adapted to letters,
telephone, fax and Internet with the development of new communication technologies. In many of today’s fraud schemes phone
numbers play an important role. Various free-to-use on-line tools
enable the scammers to hide their identities with fake names,
bogus business websites, and VoIP services. With the typology of
a sample probe of 374 emails, commonly used in business proposal
scams, the emails were categorized and tested to see how believable the proposals sound once the scammers were contacted by
phone. The research can be explored in a 5-channel interactive
audio installation called ‘Let’s talk business’.
Introduction
Phone fraud can be described as a 'fraudulent action carried out over the telephone' and can be divided into 'fraud against users by phone companies' (cramming, slamming), 'fraud against users by third parties' (809-scams, dialer programs, telemarketing fraud, caller ID spoofing), 'fraud against phone companies by users' (phreaking, dial tapping, cloning) and 'fraud against users by users' (vishing, SMS spamming). The different fraudulent actions can also be divided into technical hacking, social hacking, and mixes of both. (Rustad, 2001)
Curious anti-scam activists called 'scambaiters' adopted many of the same social engineering tactics to find methods to safely communicate with scammers, finding out how the scams work in order to warn potential victims. This artwork focuses on the 'user to user fraud' that is done by email and phone scams. Typically these scams involve storytelling and some sort of social engineering, where the fraudster creates a hyper-realistic 'too good to be true' situation for a mark, in order to extract sensitive data and/or money from the victim. (Maggi, 2010) (Mitnick, 2002) These scambaiters host informative websites where scams are reported, and forums where people can discuss suspicious business proposals.
Fake businesses and personas can appear more legitimate when connected to a phone number, enabling faster and more personal contact with the victims. (Costin, 2013) By using services like Gmail the scammers gain access to popular VoIP services like Google Talk or Skype. In addition to this, call diversion services offer scammers a way to hand out regional phone numbers, yet still answer the calls wherever they are. These free tools allow the scammers to hide their real identities and to be in contact with the victims using fake names accompanied by diverted contact numbers. Our intention was to uncover which business proposals and scam schemes are commonly used and how believable the proposals sound once we called the scammers.
The Dataset
As a raw dataset we took a sample of 374 emails with phone numbers, which were collected over a period of three weeks, from Nov. 11 to 30, 2014, from the 'scammed.by' scam email database. This website was created in 2010 under the name 'baiter_base', a place for scambaiting activists who document the activities of Internet scammers. The website provides a service to send in suspected scam emails, which are then automatically analyzed, categorized and published. From the emails we then extracted the phone numbers per country. The top five countries, in total 277 emails, were further categorized according to their narrative structures. Using a VoIP service, we then called scammers from some of the top five countries, trying to cover a variation of the ten scam scheme types. Through this experiment we found that the phone conversations were very personal in comparison to the emails: some scammers were very open to explaining their shady businesses, others preferred to use email and keep the phone conversation as brief as possible. Some of the scammers used voice-morphing software to anonymize their natural voices, resulting in a rather creepy effect. The conversations with the scammers were recorded and some of the stories were edited and can now be listened to through the SPAM cans in the art installation.
The Installation ‘Lets talk business’
The installation consists of five modified SPAM cans that are normally used to store the precooked 'SPiced hAM' produced by the Hormel Foods Corporation. According to the Merriam-Webster dictionary, the naming of unwanted mass advertisement as 'SPAM' originates from 'the British television series Monty Python's Flying Circus in which chanting of the word Spam overrides the other dialogue'. The sketch premiered in 1970, but it took until the 1990s for mass emails, junk phone calls or text messages sent out by telemarketers to be called 'SPAM'. (Templeton) While most of the scam emails tend to end up in the SPAM folder, we chose to mediate these stories through physical SPAM cans. Transducers and audio players are attached to four of the cans, so that visitors can listen to the scammers' different narratives that were recorded. The fifth device has two buttons: one button connects the visitor to a randomly chosen number from a database of scammers, the other button disconnects the call.
References
Costin, A., Isacenkova, J., Balduzzi, M., Francillon, A., & Balzarotti,
D. (2013, July). The role of phone numbers in understanding cyber-crime.
In 11th International Conference on Privacy, Security and Trust (PST 2013).
Maggi, Federico, ‘Are the con artists back? A preliminary analysis of
modern phone frauds.‘ Computer and Information Technology (CIT), 2010
IEEE 10th International Conference on. IEEE, 2010.
Mitnick, Kevin D, The Art of Deception. Wiley, 2002.
Rustad, M. L. (2001). Private enforcement of cybercrime on the electronic
frontier. S. Cal. Interdisc. LJ, 11, 63.
Templeton, B., Origin of the term “spam” to mean net abuse, www.
templetons.com/brad/spamterm.html
Growing Objects:
Testing with Biological
Generative Systems
Raul Pinto
Department of Industrial Design, Izmir University of Economy,
Izmir, Turkey
[email protected]
Paul Atkinson
Sheffield Hallam University, Sheffield, UK
[email protected]
Joaquim Vieira
Department of Material Engineering and Ceramics, University of
Aveiro, Aveiro, Portugal
[email protected]
Miguel Carvalhais
ID+, Faculty of Fine Arts, University of Porto, Porto, Portugal.
[email protected]
Keywords: biological design, generative, growth, customization,
emotion.
We intend to present the results of a test that is part of an ongoing investigation where artifacts are produced by biological systems with generative potential. In these systems, where nature's randomness and physiological processes have an important role in the definition of form, we understand that artifacts have the capacity to foster new, emotional connections that arise from their nurturing and from an understanding of their morphogenesis. In contrast to mass-production and co-design systems, for these biological systems to grow into final products, they have to be understood and nourished by their users. Their end results are singular and unique, with aesthetic qualities that arise from the understanding of the artifacts' growth constraints and the bonds that are created with them. The traditional quality canons of mass-produced goods are challenged, as the resulting artifacts will not acquire final shapes that are polished and free of imperfections, but ones that are inconstant, gnarly and sinuous.
The test consists of a small series of DIY casts and step-by-step instructions distributed to a group of people to build their own matrices and grow their own products, with the intention of better understanding how individuals respond to these objects. The casts consist of an STL (stereolithography) 3D-printable format and a PDF drawing of the cutting dimensions for a plastic sheet. After being printed and cut, these materials are easily assembled and filled with mycelia-inoculated straw. To ease the users' job, they will be recommended to transfer the content of a commercial mushroom kit into the predefined form. Dimensions are constrained by the printing volume of an average low-cost 3D printer, and the initial user group will be selected from people with some experience with commercial mushroom growing kits, ensuring some familiarity with the nurturing process and giving us an emotional comparison between a traditional commercial kit, focused only on producing edible mushrooms, and the possibility of giving the substrate a second use.
Each user will be asked to nurture their artifact into a final object. For this, they will have to follow the normal instructions of the familiar commercial kit, and all options will be their responsibility: sunlight exposure, room temperature, when and how much to water, growth interruption, etc. Each user will be asked to make a written log of their options and a photographic record of the mycelia's expansion and mushroom growth, and to describe their feelings towards it.
The system and the initial template were designed in order to leave most of the growth constraint choices to the user. We believe that a greater awareness that their actions helped define the final object will also generate a greater tie-in between user and object, a connection by emotion and understanding more than the mere relationship of possession.
All outcomes and examples of systems at various growth stages will be presented; we also intend to make the templates available to the general public, along with instructions for their replication, allowing the testing to continue into a larger data-gathering exercise. Attached is a document with the description of a previous test with a similar technique.
Performances
Fluid Control:
Media Evolution in Water
Christoph Theiler
wechselstrom (artist group) Vienna, Austria
[email protected]
Renate Pittrof
wechselstrom (artist group) Vienna, Austria
[email protected]
Keywords: controller, computer interface, water, electronic music, video, mass inertia, fluid, potentiometer, switch, fader
The Scottish School of
Flower Arranging: Rikka
James Wyness
[email protected]
Graeme Truslove
University of the West of Scotland
[email protected]
Keywords: Live-electronics, improvisation, instrument design
The music of the Scottish School of Flower Arranging seeks to explore, by means of hand-made acoustic and digital tools, found objects and materials, an intimate sound world created from a few simple and basic elements which change slowly over time and which investigate asymmetry, materiality, irregularity, economy of means, and the emulation of natural processes. The name is derived from the Japanese zen-influenced practice of flower arranging and pays homage to similar socio-aesthetic activities such as the tea ceremony, where the qualities of restraint, careful placement of elements and detachment are highly valued, as are attributes of wabi-sabi such as the imperfect, the impermanent and the intimate.
Thermospheric Station:
Seeking out the Aesthetics
of Interactive Post-Digital1
Artwork with the use
of Gametrak
Jung In Jung
University of Huddersfield
[email protected]
Choreographers:
Dane Lukic
Glasgow Caledonian University
[email protected]
Stefanos Dimoulas
Royal Conservatoire of Scotland
[email protected]
Keywords: dance and sound collaboration, interactive, sound art
Fig. 1 Thermospheric Station – Live
performance at Flux Factory, New
York, in 2014. A dancer is tied onto
a column with the red wires of
Gametrak to represent the power
of gravity. The centre of gravity is
the column not the ground in this
performance. (Choreography by
Dane Lukic and Quentin Burley /
Photograph by Amela Parcic)
1 The term ‘post-digital’ in the
title has the same meaning as how
Kim Cascone used in his essay The
aesthetics of failure: ‘post-digital’
tendencies in contemporary computer
music (2000).
Thermospheric Station is an interactive dance and sound collaboration using the game controller Gametrak2 as the interface to connect sound and movement. Gametrak's motion tracking system is very simple compared to other interactive technology nowadays, such as Kinect3. In this article I explain how Gametrak was used in Thermospheric Station to create sound and dance improvisation with its very reduced and limited functions. Most importantly, I illuminate the aesthetics of the composition as a digital artwork with the use of the motion tracking technology of the controller.
In Thermospheric Station the dancers are expected to create choreography motivated by the properties of Gametrak and not to use the device only as a sound controller. Brian Massumi insists, "The strength of interactive art is to take the situation as its 'object'" but "not a function." (Massumi, 2011:52) According to the theme 'Gravity and non-gravity' the choreographers interpreted the situation as if they were in a space where gravity and non-gravity co-exist. One of them was tied onto a column with the red wires of Gametrak to represent the force of gravity, but the other performer was allowed to move freely as a contrast (Figure 1). The red wires attached to the dancer's body force the dancer to go back to the beginning position, as if gravity limits the dancer's movement4. At the very beginning of the performance sound is almost synchronised with the fragmented movements of the dancer who is connected to the column. As the other dancer steps into the set, the fragments of noise change to a sustained high-pitched note demonstrating the tension between the two dancers. I focused on composing 'gestures' of stretching and pulling sound, and then progressed the sound to express the overall tangled 'situation' with the wires (Figure 2). Towards the end of the performance the synchronisation of sound and movement becomes very loose. This is to let the performers have a moment to express movements "emerging 'with' technology, rather than technology itself" (Stern, 2013:64), looking for an extended meaning of the device as a part of the composition.
2 The Gametrak controller was invented in 2000 by Elliott Myers, who founded In2Games and designed for home video game platforms. A standard potentiometer turns as the wire attached to the base console is pulled in and out, and the analogue information of distance and location transforms to digital data.
3 Kinect is a motion tracking device by Microsoft for Xbox games and it was introduced for the first time in 2010. It provides full-body 3D motion capture, facial and voice recognition capabilities.
4 The sound composition is also programmed to stop if the performers do not go back to the beginning position. To continuously interact with sound the performers are forced to return to the beginning position regularly.
5 A pair of wearable gloves which can be attached to the end of the wires are included in the Gametrak games package. More information can be found here: http://uk.ign.com/articles/2004/09/16/introducing-the-gametrak. Last accessed 24 April 2015.
Fig. 2 Thermospheric Station – Live
performance at Flux Factory, New
York, in 2014. A dancer is entangled
with the red wires of Gametrak and
the wires create complex tangled
lines. (Choreography by Dane Lukic
and Quentin Burley / Photograph by
Amela Parcic)
6 http://uk.ign.com/articles/2006/04/14/exclusive-gametrak-interview-with-developer-in2games. Last accessed 24 April 2015.
7 http://www.ryoichikurokawa.
com/project/5horizons.html. Last
accessed 24 April 2015.
Gametrak controllers were packaged mostly with golf or fighting games because they were supposed to track the physical movement of a player's hands, connected to the base station with two red wires5. It was released as a sensational controller which could track the moving points of the wires in 3D space with stability6, although it did not survive for long in the game market after Kinect was released. However, the 'failure' of Gametrak in the market inspired me, and I have sought out what made my collaboration special using the aesthetics of this particular technology. Gametrak can only track the distance and location of an object connected to its red wires, and perhaps this reduced capability was the main reason for the failure. Although its motion tracking technology is very limited, it has an appearance which distinguishes it from other motion tracking devices; the red wires of Gametrak expose the tracking points of an object, highlighting how the object is connected to the centre of the device in 3D space and how it moves from one point to another. This visible analogue information of the wires' distance and location transforms to digital data. I could find a similar aesthetic in Ryoichi Kurokawa's visual work Rheo: 5 horizons, which shows photographs of real landscapes and then reveals their symmetrical lines in the digital domain behind, on five HD displays7. Similarly, Gametrak's red wires expose the "background" of interactive "digital technology to the foreground" (Cascone, 2002) with a tangible and visible method. In this performance my focus of interactivity as a composer has moved from how to track a performer's physical movements through technology to the aesthetics of the gestures and shapes created by the extendable wires of Gametrak.
References
Cascone, K. 2000. The Aesthetics of Failure: “Post-Digital” Tendencies in
Contemporary Computer Music. Computer Music Journal Vol. 24, No. 4,
pp. 12-18
Exclusive GameTrak Interview with Developer In2Games - IGN. (n.d.).
Last accessed 24 April 2015. http://uk.ign.com/articles/2006/04/14/
exclusive-gametrak-interview-with-developer-in2games
Introducing the Gametrak - IGN. (n.d.). Last accessed 24 April 2015. http://
uk.ign.com/articles/2004/09/16/introducing-the-gametrak
Massumi, B. 2011. Semblance and Event: Activist Philosophy and the
Occurrent Arts. Cambridge, MA: MIT Press.
Ryoichi Kurokawa. (n.d.). Last accessed 24 April 2015. http://www.
ryoichikurokawa.com/project/5horizons.html
Stern, N. 2013. Interactive Art and Embodiment: The Implicit Body as
Performance. Gylphi Limited Book.
Photography by
Michael Pawlukiewicz
Photographs by
Michael Pawlukiewicz
Fermata:
Live Coding Performance
Pete Furniss
University of Edinburgh
[email protected]
Thor Magnusson
Music Department, University of Sussex
[email protected]
Keywords: Improvisation, Live Coding, Microtonal Composition,
Spatial Music
Putney for game-audio
Ricardo Climent
NOVARS Research Centre, University of Manchester, United
Kingdom
[email protected]
Mark Pilkington
University of Lancaster
[email protected]
Keywords: composition, game-audio, vcs3, analogue modular
synthesis, procedural audio, interactive media
Putney "K" is an interactive media work featuring a virtual VCS3 synthesizer created with a graphics-physics-game engine and originally composed and designed by Ricardo Climent. In 2015 the piece's concept was further explored in the form of two collaborations, which expanded its existing degree of expression and interactivity. The first incorporated Mark Pilkington's creative input via the live performance of an original EMS VCS3 analogue synthesizer (interacting with the virtual one played by Climent), and the second was a sound installation entitled ARVR Putney by Alena Mesarosova (Manusamo&Bzika), which combined augmented and virtual reality. This concert version introduces a range of uncontrolled sonic fantasies (aural paidia, as in R. Caillois's typology). The main character (called Putney) is a potentiometer and sonic scanner retired from a classic 1969 VCS3 synthesizer. To return home at Putney Bridge in London, the performer must collect components (vernier pots, VU meters, knobs, pins), electronics (PICs, capacitors, resistors) and circuit schematics, and needs to solve a number of aural challenges (ludus) and earn enough compositional esteem. The game engine's play-through provides a dynamic graphic score for Pilkington, while opening communication channels to allow a number of performers to take part in an extended musical network. The performer of the real VCS3 takes on the role of a game player and interjects dramaturgy through the expression and manipulation of the instrument, to form a dynamic musical interplay. The synthesizer's unique semi-modular design and wild/chaotic character portray a disruptive link within the flow of the piece, while its tactile embodiment extends the virtual beyond the frame of the screen. The acts of play and containment become the boundaries that provoke intertwined emotional responses informed by chance, indeterminacy and algorithmic decision-making processes. Two systems of difference between the poles of the virtual and the real provide a homomorphic experience in which the viewer becomes engaged and immersed. The point of intersection of the two sensory fields is a thrilling and inspiring experience as it coexists between congruence and incongruence. The giant VCS3-gamepad controller for Putney was commissioned from, and designed by, Iain McCurdy.
Fig. 1 Putney (the Potentiometer) as seen in Unreal Engine 4.
Fig. 2 Mark Pilkington with VCS3
and McCurdy’s giant joystick.
Fig. 3 Putney in the VUMeter room
as seen in Blender Game Engine.
Fig. 4 Level Blueprint section
in Unreal Engine 4.
Fig. 5 EMS studio recreated with
Synthi 100 in the background.
Fig. 6 Visit in the summer of 2014 to
Putney Road 277 (original home of
the EMS studios in 1969, currently
a hairdressing shop).
Drei Mal Acht || 24
Ephraim Wegner
HS Offenburg, Germany
[email protected]
Astrid Wegner
[email protected]
Keywords: Digital Signal Processing, Generative Art,
Conceptual Writing, Music
Drei Mal Acht || 24 is a performance for a personal computer, a human being, a "construction kit" of various text fragments, a "steering force"-motivated particle system and the audience.
Starting with simple text fragments about very basic bodily functions which usually take place without any reflection or conscious control, like blinking or breathing, the work combines different layers of text, human interaction, computational process and perception.
Various algorithms (used to define how particles interrelate with each other) are waiting in line to successively hijack the parameters of granular synthesis in a Csound data set created for live interaction. The particles roam along apparently random coordinates, continuously trying to find the best way in an ongoing calculation process. In combination with concentric circles, the particles move towards each other, simulating an emergent system and implying further fields of association.
1 Introduction
To use language for whatever purpose always means to use the transformational apparatus of the language to refer to something that already exists – ideas, attitudes and discourses. Language structures perception and it is needed to connect with others, for collective decisions and to establish a relationship between individual perception and the external world.
The linguistic codes, culture-specific communication and collective understanding are both a blessing and a curse. The perceptual filters of the mind define the frame of the thinkable: instead of having an immediate experience, we put everything in its well-known place.
Human perception works very well without knowing anything about the complexity of the biological and chemical processes which are the precondition and cause of any perception at all. You don't need the ability to explain what your eye is or does to see a tree when it's right in front of you.
The linguistic examination of breathing is not the same as the act itself; it is nothing but a reference chain comprehensible to any person familiar with the rules implied.
2 The Performance
The system starts with immediately mediated banalities. Very basic bodily functions that usually take place without any reflection or conscious control, like blinking or breathing, are verbalized and transformed into soundscapes via granular synthesis. The sound of the phrases itself is dismantled into spare parts, divided into pieces and transformed.
The artificial nature of the linguistic signs, the inner distance between the object and what is happening (between the signifier and the signified), is addressed at different levels of analysis and performance by the syntax of the sound and the sound of the syntax.
In the course of this process the linguistic modules are gradually expanded – the expressions move away from the obvious, creating a field of associations, leading to abstractions, producing ambiguities and focusing on their own deficiencies.
In analogy, the audio synthesis is increasingly driven by the data of the particle system generated by Processing.
The balance of power between the human interactor and the "steering system" shifts in the direction of the particles. As the flow of language grows more complex, allowing reflection on the one hand and making action more difficult on the other, the scope of action is cut down for the sound artist. The performer continuously loses control of each parameter that "steers" the granular synthesis to the particles of the generative system.
2.1 Data Flow
The data of the Processing sketch is transferred to the Csound instrument to manage the parameters of granular synthesis. Here it forms an aleatoric element gaining more and more control over the process. The original sound (in this case, a language sample) is dissolved and recombined; fractals unfold from micro- and meso- to macro-level.
Algorave
Rectangular Rotation
Christian Faubel
Academy of Media Arts Cologne
[email protected]
Rectangular Rotation is an audiovisual performance based on a modified overhead projector, the ZoOHPraxiscope. Analog electronics are used to control motors and generate movement and sound. These movements are projected and create an animated shadow play that is always in sync with the sound. In addition, the projector light can be switched at high rates to create flickering light and cinematographic animation of rotating picture discs.
Algorave Performance
Martin Zeilinger
Ontario College of Art & Design
[email protected]
Algorave Performance
Alex McLean
ICSRiM, School of Music, University of Leeds
[email protected]
Algorave Performance
Shelly Knotts
Durham University
[email protected]
As code-fuelled ravers dance to wonky algorithmic glitches, an onstage battle of wills occurs as the Algorave programmer coerces SuperCollider's JITLib into an inevitably noisy landscape of deformed calculations and deviant beats.
A performance narrative is derived by searching Twitter during the performance for tweets containing the words "algorithm" and "rave". Scanning the collected tweets for commonly occurring words provides ad hoc and abstract performance directions and an artificial social commentary on the performer's genuinely antisocial code play.
Algorave Performance
Sam Aaron
University of Cambridge
[email protected]
Sam Aaron performing with
Christian Faubel’s visuals
Keynote
Evolution Art
and Computers Now
William Latham
Goldsmiths, University of London
[email protected]
William Latham was one of the pioneering UK computer artists and rapidly gained an international reputation in the 1980s. His work blends organic imagery and computer animation, using software modelled upon the processes of evolution to generate three-dimensional creations that resemble fantastical other-worldly forms such as ancient sea shells, contorted animal horns or organic alien spaceships. His latest exhibition Mutator 1+2, produced in collaboration with mathematician Stephen Todd, features interactive projections that blur the barriers between art and science.
Biographies
Sam Aaron
Sam Aaron is a live coder who considers programming as performance and
strongly believes in the importance of emphasizing, exploring, and celebrating creativity within all aspects of programming.
Sam is also the creator of the music live coding environment Sonic Pi, a new
musical instrument that uses code as its interface. Sonic Pi is both simple
and powerful; simple enough to use to teach introductory programming
and music within schools, yet powerful enough for professional artists to
perform with in nightclubs. By day, Sam is a Postdoc Researcher at the University of Cambridge Computer Laboratory and by night he codes music for
people to dance to.
http://twitter.com/samaaron
http://facebook.com/livecodersamaaron
Monty Adkins
Monty Adkins is a composer and performer of experimental electronic music. Since 2008, his compositions have been characterised by slow shifting organic textures often derived from processed instrumental sounds. Exploring a post-acousmatic aesthetic, his music draws together elements from ambient, acousmatic and microsound music. His thinking is bound up in the notion of nodalism, a concept expounded in a recently published paper delivered at the xCoAx Conference, Portugal (2014). Prof. Adkins is also active as a writer and concert curator. His recent edited volume (with Prof. Michael Russ) on the music of Roberto Gerhard was the first scholarly edition produced on the composer's work and includes chapters on Gerhard's hitherto unknown electronic music. Adkins directed an AHRC project based at the University of Cambridge Library to recover and digitise Gerhard's electronic music. These works are published for the first time on CD/LP by sub rosa (Belgium).
www.montyadkins.wordpress.com
Gabriella Arrigoni
Gabriella Arrigoni is an AHRC-funded PhD candidate in Digital Media at Culture Lab (Newcastle University), where she is researching the notion of the artistic prototype and exploring the relationship between art practice and labs. Former editor in chief of UnDo.net, the first Italian network for contemporary art, she has also published articles and essays in a number of magazines, catalogues and web platforms. She has curated exhibitions, workshops and talks in galleries and not-for-profit spaces with a special focus on topics such as science, science fiction and co-creation.
Paul Atkinson
Paul Atkinson is an industrial designer, design historian and educator with a PhD from the University of Huddersfield. He is currently Professor of Design and Design History at Sheffield Hallam University and has had articles published in a number of international design journals. He has authored two books on the design history of computers (Computer, Reaktion 2010, and Delete: A Design History of Computer Vapourware, Bloomsbury 2013), and contributed a number of chapters to edited books. He has also written about the future of the design profession and examined the future impact of emerging technologies on the nature of design through practice-based research into Post Industrial Manufacturing.
www.paulatkinsondesign.com
Gilberto Bernardes
Gilberto Bernardes received a Master's in Music Performance from the Amsterdamse Hogeschool voor de Kunsten and a PhD in Digital Media from the University of Porto under the auspices of the University of Texas at Austin. He was awarded the Fraunhofer Portugal Challenge 2014 prize for his PhD dissertation. Currently, he is a postdoctoral researcher at INESC TEC, where he pursues work on automatic music generation, and an Associate Professor at the Polytechnic Institute of Castelo Branco. Dr. Bernardes is active as a saxophonist, new media artist, and researcher in sound and music computing.
Frédéric Bevilacqua
Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM. His research concerns the modelling and design of interaction between movement and sound, and the development of gesture-based interactive systems. He holds a master's degree in physics and a PhD in Biomedical Optics from EPFL in Lausanne. He also studied music at the Berklee College of Music in Boston and has participated in different music and media arts projects. In 2003 he joined IRCAM as a researcher on gesture analysis for music and the performing arts.
ismm.ircam.fr/
Peter Beyls
Peter Beyls is an interdisciplinary artist developing generative systems in music, the visual arts and hybrid formats. Beyls studied at the Royal Music Conservatory Brussels, EMS Stockholm, Ghent University and the Slade School of Art, University College London. Current research interests include machine learning for interactive music systems and cognitive issues in software for art. Beyls holds a PhD in Computer Science from the University of Plymouth, UK and is currently a full-time research professor at CITAR, Universidade Católica Portuguesa, Porto, and visiting professor of Media Art at the School of Arts, University College Ghent.
Josh Booth
Josh Booth's work explores modes of disorientation / orientation through immersive, hyperactive sound and video. Both mediums exhibit various cultural allusions - from noise and deep bass music to glitch aesthetics, retro / low-res gaming, and concrete cinema. His pieces have been performed and exhibited both internationally and in the U.S. Some notable events include (un)Scene Art Show (NYC), Mantis Festival (UK), Aperture Foundation (for Rinko Kawauchi's "Ametsuchi" exhibition) (NYC) and the Prague Quadrennial – Sound Kitchen (CZ). He has also worked as a co-producer/co-writer with the hip hop group dälek on Ipecac Recordings since 1998. He is currently a PhD candidate in composition at Rutgers University Mason Gross, New Jersey, where he studied under Charles Wuorinen and teaches as a part-time lecturer.
www.youtube.com/channel/UCiVtT7MR9hOXgcFWTaHmiJw
soundcloud.com/joshuabooth-1
Victoria Bradbury
Victoria Bradbury @vbradbury is an artist weaving programming code,
physical computing, body and object. She is completing her PhD research
with CRUMB at the University of Sunderland, UK, and recently has implemented collaborative methods in forums such as workshops, art hacking
events, and residencies. These include the IMMERSION: Art and Technology workshops (Shanghai, China, 2012) and Digital Media Labs (Barrow, UK,
2014). Bradbury recently co-organized the Thinking Digital Arts // Hack
(Newcastle UK, 2014) and was a member of the British Council team for
Hack the Space, Tate Modern. (London UK, 2014).
www.victoriabradbury.com
Marcelo Caetano
Marcelo Caetano received the Ph.D. degree in signal processing from Université Pierre et Marie Curie - Paris 6 in 2011 under the supervision of Xavier Rodet, head of the Analysis/Synthesis group at IRCAM. He was Marie
Curie postdoctoral fellow with the Signal Processing Laboratory at FORTH
in 2012-2013. Currently, he is postdoctoral fellow with the SMC group at
INESC TEC and invited Assistant Professor at FEUP, University of Porto.
Dr. Caetano’s research interests range from musical instrument sounds to
music modeling, including analysis/synthesis models for sound transformation and music information retrieval.
Pedro Cardoso
Pedro Cardoso is a designer, and a gamer. He is also a researcher at ID+, Research Institute on Design, Media and Culture. He has a BA and MA on Design and is currently a PhD student on Art and Design both at the University of Porto, pursuing studies in video games in the context of new media
and interaction design, and developing experimental work in this scope. He
is currently a guest lecturer in the Department of Design at the University
of Porto, Portugal.
pcardoso.tumblr.com
Miguel Carvalhais
Miguel Carvalhais is a designer and a musician. He has a PhD in Art and Design from the University of Porto, and is an Assistant Professor in the Department of Design of the University of Porto and a researcher at ID+, chiefly focusing on interaction design and computational media and arts. He collaborates with Pedro Tudela in the @c project, developing works in musical and audiovisual composition, music for theatre, sound performances and installations. In 2003 he founded the Crónica media label, which he has been running since.
www.carvalhais.org
Alessio Chierico
Alessio Chierico is an MA candidate of the Interface Culture department at the University of Art and Design Linz. He is a former student of the art academies of Urbino, Carrara and NABA in Milan. His artistic research is based on the deconstruction of interfaces, looking for a new naturalness formed by the essential properties of material and immaterial objects. A large part of his research focuses on the aesthetics of digital representation. He has had more than fifty exhibitions, including: the ArteLaguna prize in Venice, the Ars Electronica festival, the Museu Nacional de Arte Contemporânea of Lisbon, the Victoria Art gallery in Bucharest, and Speculum Artium in Slovenia.
chierico.net
Ricardo Climent
Ricardo Climent is Professor of Interactive Music Composition at the University of Manchester, UK, where he serves as director of the NOVARS Research
Centre and as head of Composition. For the last few years his research has
focused on the potential of game-audio, physics and graphic engines for
compositional purposes, using ‘the aural’ as the primary source for navigation and exploration.
game-audio.org
electro-acoustic.com
www.novars.manchester.ac.uk
Stefanos Dimoulas
Stefanos Dimoulas was born in Volos, Greece in 1994. He joined the BA
Modern Ballet course of the Royal Conservatoire of Scotland in 2012 as a
full scholarship award student. Back in Greece he worked for Anti-thesis
Dance and Theatre Company and was chosen to dance in the production of
‘The Nutcracker’ by Universal Ballet of Korea. Since he has been in Glasgow, apart from having the lead male role in several dance videos, including ‘Something in my heart’ by Röyksopp, he was cast in one of the roles in Matthew Bourne’s ‘Lord of the Flies’ and was one of the two students chosen to travel to Singapore with Scottish Ballet for the Youth Cultural Exchange Project as part of the Commonwealth Games.
www.stefanosdimoulas.com
Christian Faubel
Christian Faubel works at lab3 – the laboratory for experimental computer science at the Academy of Media Arts Cologne. Until 2012 he worked at the Institute for Neural Computation in Bochum, where he received his Ph.D. in electrical engineering in 2009. In his work he is interested in what it is that enables autonomous behavior, and how complex autonomous behavior may result from the interaction of very simple units and from the dynamics of interaction between such units. He explores the assembly of simple units into systems and the emergence of autonomous behavior in both artistic and scientific research.
interface.khm.de/index.php/people/lab3-staff/christian-faubel/
Emmanuel Flety
Emmanuel Flety is a hardware engineer at IRCAM and part of the Sound Music Movement Interaction team (ISMM). He designs sensor electronics and platforms for gesture capture and recognition, along with digital connected objects.
Jules Françoise
Jules Françoise is a postdoctoral researcher in the {Sound Music Movement} Interaction team at Ircam, working on the SkAT-VG European project. He holds a Master's Degree in Acoustics and a PhD in computer science from Université Pierre et Marie Curie, which he completed in the {Sound Music Movement} Interaction team at Ircam. His research interests intersect human-computer interaction and machine learning, with a focus on expressive movement and its interactions with sound. He is interested in understanding and designing interactive systems exploring movement-sound interaction, with applications to performance, interactive sonification and rehabilitation.
julesfrancoise.com
Pete Furniss
A professional clarinetist for over 20 years, Pete’s career has covered a
broad spectrum of musical activities and interests, from concerts with orchestras and ensembles to free improvisation, arranging and conducting,
education at all ages and standards, and more recently, solo performances with electronics. The emergence of ‘electro-instrumental’ or ‘live electronic’ practice since the 1960s has afforded significant augmentation to
traditional acoustic instruments in terms of timbre, harmony, pitch range
and spatiality. Pete’s work explores the relationships between human presences, machine intervention and considerations of space in live electronic
‘performance ecosystems’, and asks questions about identity, agency and
perception in music making as a whole.
Julien Groboz
An independent designer, he obtained his industrial creator diploma in 2012 from ENSCI Les Ateliers, Paris, France, with honors for "the research process". In his personal research he explores different ways to approach uses and shapes through experimentation with basic interactions with objects and motion principles. In 2014 he obtained project assistance from the VIA for the Oo lamp, which was exhibited during the shows "Maison & Objet" (Villepinte, France) and "France Design" (Studio Più, Milan, Italy). In collaboration with the IRCAM institute, he participates in the Legos project, which studies sound-gesture relationships.
www.juliengroboz.fr
Rainer Groh
Prof. Dr.-Ing. habil. Rainer Groh teaches media and interaction design
at the Technische Universität Dresden. His main research areas are the theory and methodology of interactive imagery and Visual Engineering. The
interdisciplinary research addresses the rapid development of hardware
and software systems by establishing novel kinds of interfaces between human and computer. The focus is on user and situation aware data visualization with the help of innovative technologies such as gestural interaction,
autostereoscopy, and gaze tracking.
mg.inf.tu-dresden.de/mitarbeiter/rainer-groh
Thomas Gründer
Thomas Gründer is a research associate at the Technische Universität Dresden. He is working on the project DAMM, sponsored by the DFG. His main research topics include the development, evaluation and methodology of user interfaces. One core research area relates to interaction in augmented reality scenarios using gestures, gloves and wearables, and the expressiveness of diverse human motion.
mg.inf.tu-dresden.de/mitarbeiter/thomas-gründer
Edwin van der Heide
Edwin van der Heide is an artist and researcher in the field of sound, space
and interaction. He extends the terms composition and musical language
into spatial, interactive and interdisciplinary directions. His work comprises
installations, performances and environments. In his pieces, the audience
members are placed in the middle of the work and challenged to actively
explore, interact and relate themselves to the artwork. Besides running his
own studio, he is a part-time Assistant Professor at Leiden University (Media Technology MSc programme) and heading the Spatial Interaction Lab at
the ArtScience Interfaculty of the Royal Conservatoire and Royal Academy
of Art in The Hague.
www.evdh.net
Rodrigo Hernández
Rodrigo Hernández-Ramírez was born in Mexico City in 1982. In 2006 he
obtained a BA in social communications. After a few years working as a
production and photography assistant, and as a web designer, he decided to
pursue an MA in photography at the Faculty of Fine Arts in Lisbon. Currently he is completing his Ph.D. dissertation at the same institution.
His primary research interests include media studies, photography, visual
hermeneutics, philosophy of science and philosophy of technology.
His future research plans involve analysing the cultural impact of information technology and the influence of pragmatist discourses on current
media analysis.
Olivier Houix
He obtained a PhD degree in acoustics in 2003 from the Université du Maine, Le Mans, France. His research interests concern the perception of environmental sounds and the gesture-sound relationship applied to sound design. He teaches sound practice at the Superior School of Fine Arts TALM Le Mans and in the Master of Sound Design programme. He is a member of the Sound Design and Perception Team at Ircam, where he has participated in national and European projects such as CLOSED. He participates in the Legos project, which studies sound-gesture relationships, and in the SkAT-VG European project (Sketching Auditorily with Vocalizations and Gestures).
pds.ircam.fr/864.html
Mark Hursty
Mark Hursty is a researcher at the National Glass Centre (NGC), University of Sunderland, UK. His project concerns reviving and reinterpreting
pressed glass, a serial mass production technique, for sculptural use. He
also combines interdisciplinary methods to create multimedia installations
and performances, including Metabellum (2011). CAD/CAM is integral to
Hursty’s work, which uses digital fabrication to create elaborate moulds for
pressing molten glass. He founded Hurstin Studio Glass and Metal in Plymouth, MA, USA (1999), and was Glass Program Head at Jacksonville University, USA (2008-10). Hursty is a Fulbright award recipient (Beijing, 2011) and spent 16 months at Tsinghua University. He continues to research, collaborate and teach within universities and private studios
in China.
www.markhursty.co.uk
Jung In Jung
Jung In Jung is a sound artist who has collaborated with contemporary dancers, exploring how to present sound composition in interesting, interactive ways. Jung is from South Korea and was educated at London Metropolitan University, where she specialised in sound and media. She started using computer technology to create installations and performances while she was doing her master's degree at the University of Edinburgh. She completed a one-year artist-in-residence programme in New York City in 2014, and returned to the UK to pursue her PhD in Music Technology at The University of Huddersfield.
www.junginjung.com
Paul Keir
Dr. Paul Keir is a lecturer in the School of Engineering and Computing at
the University of the West of Scotland. Previously, Paul led the research and
development group at Codeplay Software Ltd. in Edinburgh, with responsibility for Codeplay's two EU FP7 projects: LPGPU and CARP. Prior to that,
Paul gained 10 years of professional experience developing video games
and interactive graphical applications in both research and commercial environments. Paul has an M.Sc. in 3D Computer Graphics and an M.Sc. in
High Performance Computing (HPC); and completed a Ph.D. in Computer
Science at the University of Glasgow in 2012, on the topic of heterogeneous
multicore compilers.
Damián Keller
Damián Keller (DMA Stanford University 2004; MFA Simon Fraser University 1999) is associate professor at the Federal University of Acre, Brazil,
where he coordinates the Amazon Center for Music Research (NAP - Núcleo
Amazônico de Pesquisa Musical). Member and cofounder of the Ubiquitous
Music Group (g-ubimus), his research focuses on ecologically grounded creative practice and ubiquitous music. He coauthored the volumes Ubiquitous
Music (Springer) and Musical Creation and Technologies (ANPPOM Press).
His musical output features the application of ecocompositional techniques
in theater, film, electroacoustic and installation artworks.
ccrma.stanford.edu/~dkeller
Shelly Knotts
Shelly is a data-musician who performs live-coded and network music internationally, collaborating with computers and other humans. She has received several commissions and is currently part of Sound and Music’s ‘New
Voices’ emerging-composer development scheme. She is currently studying
for a PhD with Nick Collins and Peter Manning at Durham University. Her
research interests lie in the political practices implicit in collaborative network music performance and designing systems for group improvisation
that impose particular (anti)social structures. As well as performing at numerous Algoraves and Live Coding events, current collaborative projects
include network laptop bands BiLE and FLO (Female Laptop Orchestra), and
live coding performance [Sisesta Pealkiri] with Alo Allik.
shellyknotts.co.uk/
Nicole Koltick
Nicole Koltick is an Assistant Professor in the Westphal College of Media
Arts & Design at Drexel University and the founding Director of the Design
Futures Lab where she leads a graduate research group in critical design
practices investigating the intersection of artificial intelligence, robotics,
ethics, design and aesthetics. Nicole writes extensively on the philosophical and theoretical implications concerning concepts of the “natural”, the
“synthetic”, aesthetics, the rapidly evolving digital landscape and implications of emerging computational ecologies. She has recently completed
papers on dark data, aesthetics of emergence, materiality and agency in the
future and speculative realist approaches to design.
www.designfutureslab.com
Gordan Kreković
Gordan Kreković is a PhD student at the Faculty of Electrical Engineering
and Computing, University of Zagreb. The focus of his research is on applying artificial intelligence techniques to sound synthesis in order to achieve an intuitive interface for controlling the sound synthesis process. Such an interface could help musicians to be more efficient and creative.
Outside the academic life, Gordan works as a manager in a software development company and spends his free time composing music and playing
keyboards.
Linda Kronman
Linda Kronman is a media artist and designer from Helsinki, Finland currently living and working in Linz, Austria. In her artistic work she explores
interactive and transmedial methods of storytelling with a special focus on digital fiction. In connection with her studies in the MediaArtHistories program at Danube University Krems, she explored participatory ways to experience and archive social media fiction. She is interested in participatory art and design practices, especially in connection with creative activism.
She has organized several participatory workshops and attended international exhibitions including Moscow Young Arts Biennale, Siggraph ASIA,
NEMAF and WRO Biennale, xCoAx.
www.kairus.org
Alessandro Ludovico
Alessandro Ludovico is an artist, media critic and chief editor of Neural
magazine since 1993. He received his Ph.D. degree in English and Media
from Anglia Ruskin University in Cambridge (UK). He has published and
edited several books, and has lectured worldwide. He’s one of the founders
of Mag.Net (Electronic Cultural Publishers organisation). He also served
as an advisor for the Documenta 12’s Magazine Project. He teaches at the
Academy of Art in Carrara. He is one of the authors of the Hacking Monopolism trilogy of artworks (Google Will Eat Itself, Amazon Noir, Face to
Facebook).
neural.it
Dane Lukic
Dane Lukic is a Lecturer in Organisational Behaviour and a dancer/performer originally from Bosnia and Herzegovina and currently based in London.
Dane combines his academic teaching and research with analysis in dance
and improvised performance. He has performed with a number of UK and
international dance companies and choreographers including RuthMillsDance, Dance House, Tanztheater, Secret Garden Party and Creative Scotland. Dane is trained in contemporary dance with particular focus on contact improvisation. He is particularly interested in boundary space between
different fields - between disciplines, between cultures, between styles of
dance, between art forms and between dance and non-dance space.
Thor Magnusson
Thor Magnusson’s background in philosophy and electronic music informs
prolific work in performance, research and teaching. His work focuses on
the impact digital technologies have on musical creativity and practice,
explored through software development, composition and performance.
Thor’s research is underpinned by the philosophy of technology and cognitive science, exploring issues of embodiment and compositional constraints
in digital musical systems. He is the co-founder of ixi audio (www.ixi-audio.
net), and has developed audio software, systems of generative music composition, written computer music tutorials and created two musical live
coding environments. Thor Magnusson lectures in Music and convenes the
Music Technology programme at the University of Sussex.
Alex McLean
Alex McLean is a live coder and research fellow based in ICSRiM,
School of Music, University of Leeds. His current projects include the
AHRC Weaving Codes, Coding Weaves project with the Copenhagen Centre
of Textile Research and FoAM Kernow, the AHRC Live Coding Research
Network with the University of Sussex, and numerous performance
collaborations including Slub (http://slub.org), Canute (http://canute.
lurk.org) and Sound Choreographer <> Body Code (http://blog.sicchio.
com/?page_id=350). He is active in the digital arts, co-organising Dorkbot
in Sheffield and London, co-founding the TOPLAP and Algorave movements, chairing international conferences including those on Live Interfaces and Live Coding, and currently editing the Oxford Handbook on Algorithmic Music with Prof. Roger Dean.
yaxu.org
Alena Mésárošová
Alena Mésárošová (architect) is a founding member of the interdisciplinary art group Manusamo & Bzika (created by visual artist Manuel Ferrer Hernandéz and Alena Mésárošová), which focuses on the creation of interactive installations involving the use of Augmented Reality and 3D modelling. Started in 2006, the group has produced AR creative work for numerous festivals and projects in Slovakia, Italy, Spain and Portugal. Alena holds an engineer-architect degree (Inzinier architekt) from the Fakulta Umení, Technická Univerzita v Košiciach, Slovakia, as well as a Bachelor's degree (Bakalár), and is currently pursuing a PhD at the Polytechnic University of Valencia. She has lectured at San Gregorio de Portoviejo University, Manabí, Ecuador.
manusamoandbzika.webs.com
Evandro Miletto
Evandro Manara Miletto is an Associate Professor at the Federal Institute
of Education, Science and Technology of Rio Grande do Sul (IFRS), Porto
Alegre Campus, Brazil, where he is the leader of the Applied Computing
Research Group. Since 2002 he has conducted multidisciplinary research
on topics such as Computer Music, HCI, and CSCW. He is a member of the
technical committee of Brazilian Symposium on Computer Music (SBCM)
as well as the Brazilian Symposium on Computers in Education (SBIE). His
research is currently based on networked music interactions focused on
novice users.
Nicolas Misdariis
He is a research fellow and the co-head of the Ircam / Sound Perception and Design team. He graduated from an engineering school specialized in mechanics (CESTI-SupMeca) and received his PhD at CNAM (Paris) on the topic of synthesis, reproduction and perception of musical and environmental sounds. In 1999, he contributed to the constitution of the Ircam / Sound Design team, where he has mainly developed work related to sound synthesis, diffusion technologies, environmental sound and soundscape perception, auditory display and interactive sonification. Since 2010, he has also been a lecturer within the Master of Sound Design at the Fine Arts school in Le Mans.
pds.ircam.fr/862.html
Mathias Müller
Mathias Müller is a research associate at the Technische Universität Dresden. Within the research project StereoAge, sponsored by the German Federal Ministry of Education and Research, he is working on the user-centered design of stereoscopic visualizations. Additional research areas include gesture-based and tangible interaction in virtual environments. The focus is on the unrevealed potential of real-time computer graphics and novel tracking technologies to improve the user experience of interactive 3D visualizations, utilizing concepts from art and photography for innovative visualization techniques, and psycho-physiological principles to adapt the machine to the needs and abilities of human beings.
mg.inf.tu-dresden.de/mitarbeiter/dipl-medieninf-mathias-müller
Nuno Otero
Nuno Otero is a Senior Lecturer at CeLeKT and is interested in theories and conceptual frameworks in HCI, from more traditional approaches taking a user-centred perspective to more recent trends focusing on users' experiences with technologies. In a nutshell, the question driving his research concerns understanding how the properties of distinct devices, computational artefacts and embedded external representations impact people's activities (from work-related activities to educational and ludic contexts). Furthermore, he is also keen on understanding how distinct methodologies suit the investigation of different issues along the artefact design cycle and how design solutions can be documented and reused by design teams.
Mark Pilkington
Mark Pilkington is a composer and performer of electroacoustic music. His practice encapsulates both sound and image as a means to extend spatial imaginings between real and virtual space. The coupling of sound and image is applied to electroacoustic music, site-specific installation and screen-based works. Forging immaterial and creative labor through a network of interwoven and augmented territories, his work increasingly queries the way such operations carry great critical and creative potential. Seeking new modes of critical engagement that incorporate multiple narratives through non-digital and digital aesthetics informs the direction of his pedagogy. His theoretical research focuses on the relationship between artistic genres and their respective aesthetic theories, with reference to electroacoustic music, sound synthesis, visual music, coding, philosophy and film. His practice especially focuses on audio-visual composition, using real and virtual entities as a means to explore time and space. His work has been performed, exhibited and screened at conferences and festivals throughout the UK and Europe. He carries out collaborative interdisciplinary work with composers and visual artists. His work has been performed and screened at ICMC, ARS Electronica, MANTIS festival and Open Circuit Festival.
Raul Pinto
Born in 1978 in South Africa, Raul Pinto studied in Portugal and, after graduating in Product Design (ARCA-EUAC), opened his own design studio focused on interior and product design (Aveiro Meu Amor). In 2009 he received his M.Sc. in Engineering Design (IST) and in 2011 the title of Specialist in Product Design (IPG; IPL; IPVC). He has worked as a lecturer since 2010 (IPG, IPV, UA and, at the moment, the Department of Industrial Design of the Faculty of Fine Arts and Design at the Izmir University of Economics) and is currently a PhD candidate at the University of Aveiro, where he is studying and working with biological generative systems, looking for alternative production tools aimed at customization.
www.paulatkinsondesign.com
Renate Pittroff
Renate Pittroff works as a freelance director in the areas of experimental theater and acoustic art (audio drama, radio art, sound installation). She is a lecturer at the Department of Theatre, Film and Media Studies, University of Vienna. Since 1995, she has designed and directed the projects of "theaterverein meyerhold unltd." In the areas of radio plays and acoustic art she works primarily with Austrian authors such as Friederike Mayröcker, Peter Rosei, Franz Schuh, Brigitta Falkner and Lisa Spalt. In recent years, she has presented several art projects that deal with interactive methods. These resulted in "finalbluten", an interactive radio sound installation, and the projects "bm dna: Ministry of DNA Hygiene, Department: Hair - a theatrical usurpation", "Tracker Dog" and "Samenschleuder", most recently "Re-Entry - life in the Petri dish. Opera for Oldenburg" (2010).
www.wechsel-strom.net
Antonio Pošćić
Antonio Pošćić is a PhD student at the Faculty of Electrical Engineering and Computing, University of Zagreb, and works as a software engineer and team lead at Enghouse Interactive. His research interests include innovative techniques within the field of specialized visual programming languages for artistic applications, especially sound synthesis and music creation. Motivated by his love of experimental and avant-garde musical forms, the aim of both his academic research and his music criticism is to help better understand and improve the link between technology and art.
André Rangel
A dilettante, intermedia artist/designer, and founder and art director of the 3kta project since 2003, he develops compositional structures between media. His research interests range from computable algorithms and geometry to semiotics and philosophy. He holds a PhD in Interactive Art, an MA in Digital Arts and a degree in Communication Design. He is a guest assistant professor at the Faculty of Fine Arts of the University of Porto and regularly lectures at the Portuguese Catholic University, where he was a researcher at CITAR — Research Centre for Science and Technology of the Arts.
3kta.net
www.i2ads.org/nai/author/amacedo/
Shamik Ray
Shamik is an interaction designer with a focus on creating tools that let people express, create and communicate better. He is currently based in London, UK, where he is helping Barclays design future experiences around money. Having dabbled in music, photography, computer programming and startups, he loves projects where these passions collide. He is a graduate of the Copenhagen Institute of Interaction Design and believes in designing not only 'for' people but 'with' them, putting a lot of emphasis on prototyping from an early stage. His favourite materials are paper and wood and, of late, code and electronics.
cargocollective.com/shamik
Helen E. Richardson
Helen E. Richardson is Director of the Performance and Interactive Media Arts MFA at Brooklyn College (http://wp.pima-mfa.info). Her creative work focuses on collaborative creation and social engagement. She was formerly Artistic Director of the Stalhouderij Theatre Company, Amsterdam, an international ensemble creating new works, recognized for best productions of the year in the Netherlands on themes exploring the encounter between the 'old' and 'new' world, women's rights, and economic disparity. She is a founding member, co-curator, producer and dramaturg of the Global Theatre Ensemble's theatre project on Eliminating Violence Against Women, commissioned by the United Nations, and the author of various chapters on the collaborative practices of the Théâtre du Soleil published by Routledge.
Matt Roberts
Matt Roberts is a new media artist specializing in real-time video performance and new media applications. His work has been featured internationally, including shows in Taiwan, Brazil, Canada, Argentina, Italy and Mexico, and nationally in New York, San Francisco, Miami,
and Chicago. He has shown in several new media festivals including FILE,
Zero1, 404, CONFLUX, and he recently received the Transitio award from
the Transitio_MX Festival in Mexico City. He is the founder of EMP: Electronic Mobile Performance, and an Associate Professor of Digital Art at
Stetson University.
mattroberts.info
Sofia Romualdo
Sofia Romualdo is a museologist, curator and researcher on the topic of videogames as an art form. She has a Master's degree in Museology and Curatorial Studies from the University of Porto, with a dissertation titled "Play, Games and Gamification in Contemporary Art Museums". She completed a year-long curatorial internship at the Serralves Museum of Contemporary Art (Portugal), where she worked on the exhibition "Monir Shahroudy Farmanfarmaian: Infinite Possibility. Mirror Works and Drawings 1974-2014".
She currently works as a museologist, writer and project assistant for the
permanent exhibition project at Casa da Memória in Guimarães (Portugal),
due to open in late 2015.
thecuriouscurator.com/
Tom Schofield
Tom Schofield is an artist, designer and researcher based at Culture Lab, Newcastle, UK. His practice-based research spreads across creative digital media, archives and collections, interface design/visualisation, and physical computing. His PhD thesis explored the role of technological materiality in developing works of art and design as part of ecologies of experience. Recent creative projects include Me_asure (with John Bowers), an interactive installation which combines 19th-century pseudo-science with contemporary face-tracking technology (http://tomschofieldart.com/me_asure-with-John-Bowers), Neurotic Armageddon Indicator, a wall clock for the end of the world (http://tomschofieldart.com/Neurotic-Armageddon-Indicator), and 'null by morse', an installation with vintage military equipment and iPhones (tomschofieldart.com/null-by-morse).
Hanna Schraffenberger
Hanna Schraffenberger is a PhD candidate and researcher at the Media
Technology Research Group at Leiden University. Her research interests
include human-computer interaction, interactive art and augmented reality (AR). Her PhD research examines the fundamental characteristics and
potential manifestations of AR. In particular, her research explores those
unique AR scenarios and experiences that have no equivalent in a purely
physical world. Besides doing research, Hanna is interested in communicating scientiic topics to a broader public.
www.creativecode.org
Laurent Segretier
Laurent Segretier was born in 1978 in Guadeloupe in the French Caribbean, and currently splits his time between Hong Kong and Paris. He is responsible for some of the most radical and affecting work currently being produced. During his studies at business school in France he was an active member of his school's multimedia club, as well as a young filmmaker and electronics geek working in after-sales computer shops. Always fascinated with Asia, he moved to Beijing after graduation, and has been established in Hong Kong for eight years. Despite the possibility of a career in finance, he followed his passion by working as first assistant to one of the greatest photographers in Asia, Wing Shya. He has found a balance between Asia, where he was led by memories of the pop culture of his childhood, and Paris, where he connects with the masters of the history of modern art.
www.segretier.com
Emilia Sosnowska
Emilia Sosnowska is a freelance curator and media researcher with a particular interest in emerging technologies and their impact on creative practices. She is currently undertaking her interdisciplinary PhD at the University of the West of Scotland. Her main research interest concerns multi-sensory experience in digital art. MA in Culture Theory, Wroclaw University, Poland, 2004. MFA in Visual Culture, Edinburgh College of Art/Edinburgh University, 2010. Ph.D. candidate in New Media Arts and Creative Use of Technology, University of the West of Scotland.
Luke Sturgeon
Luke Sturgeon is a London-based designer and researcher who works in the area between science, technology and humanity. His work provides new perspectives on technological research and development through the investigation of contemporary culture in relation to emerging technologies. He often utilises existing or generated data and recorded media to create digital prototypes, installations, fictional narratives, hyperrealities, and speculative design proposals that provoke individual and social reflection and critical discussion. He is a 2013 alumnus of the Copenhagen Institute of Interaction Design (CIID) and is currently pursuing an MA in Design Interactions at the Royal College of Art.
lukesturgeon.co.uk
Patrick Susini
Patrick Susini received a PhD degree and a Habilitation in Psychoacoustics. He is the head of the Sound Perception and Design group at IRCAM. His activities include research on loudness and everyday sound perception, and applications in sound design. He has coordinated several industrial, national (ANR) and European (FP7) projects. He teaches psychoacoustics,
psychophysics, and sound design.
pds.ircam.fr/861.html
Christoph Theiler
Christoph Theiler (1959, BRD) has lived in Vienna since 1982, working as a freelance composer and media artist. His recent works are in the area of multimedia and sound installation. GATE II+III are works in which new forms of interactive sound design were developed. As in the case of MEMBRAN II (for e-guitar, sax and medium wave transmitters), M.O. - HERZ + MUND (sound installation with 3 bass loudspeakers and very low frequency waves) and HF 114 (electronic composition for 7 transmitters), more and more means from the areas of electronic music, sound design, high-frequency engineering and the internet are included in his artistic conception. The electronic composition NEARNESS was published on the "Sonic Circuit" festival CD in 2001. He received the composition prize of the city of Stuttgart (1982) and the composition prize "Luis de Narváez", Granada (1993), for his 1st and 2nd string quartets. Recordings have been made by WDR, ORF, Deutschlandradio, Radio Koper, Ljubljana-TV and BR. He has written compositions for chamber ensemble, orchestra, electronics, theatre and radio play.
www.wechsel-strom.net
Brad Tober
Brad Tober, an Assistant Professor of Graphic Design at the University of Illinois at Urbana-Champaign (USA), is a designer, educator, and researcher whose work explores the potential of emerging code-based and interactive visual communication technologies, with the objective of identifying and investigating their relationships to design practice and pedagogy. His practice-led research entity, the Experimental Interface Lab, is characterized by a speculative approach to design (a manifestation of pure research) that recognizes that forms of and methodologies for contemporary practice spanning design and technology are best developed through fundamentally flexible and exploratory processes. Brad holds an MDes from York University (Toronto, Canada), a BFA in graphic design from the Savannah College of Art and Design (USA), and a BA in mathematics from the University at Buffalo (USA).
bradtober.com/
experimentalinterfacelab.org/
Graeme Truslove
Graeme Truslove is a composer and performer based in Glasgow, Scotland. His output includes sonic and audio-visual compositions and improvised music, playing guitar and/or laptop in various solo and collaborative projects. His work is largely concerned with conflicts between intuitive performance and the fixed medium, often integrating microtemporal and immersive approaches to sound creation and performance. Truslove has performed and exhibited his work internationally, and has attracted awards from Metamorphoses (1st prize), The Salford Sonic Research Commission, Creative Scotland, The British Council, PRSF, The Dewar Arts Award and others. He holds both an M.Eng in Electronics with Music and a Ph.D. in Music Composition from the University of Glasgow. He currently lectures in Composition and Music Technology at the University of the West of Scotland.
www.graemetruslove.com
Katharina Vones
Originally from Cologne, Germany, Katharina Vones completed her
BA(Hons) in Design and Applied Arts at the Edinburgh College of Art in
2006, and gained an MA RCA from the Royal College of Art, London, in
2010. She subsequently established her practice as an independent jewellery artist and researcher in the Scottish capital of Edinburgh, and as such
has exhibited and presented her work widely both nationally and internationally. She is currently in the final year of her PhD at the University of
Dundee, where she investigates the use of smart materials and microelectronics in the creation of stimulus-responsive jewellery.
www.smart-jewellery.com
www.kvones.com
Astrid Wegner
Astrid Wegner (*1984) studied Literature, Media Studies, Sociology and
Cultural Studies in Marburg, Mainz and Hagen. She is currently exploring
semi-automatic, half-conscious and intertextual forms of writing – looking
for narratives that are genuine and reasonable in an attempt to merge different carriers of meaning. She is particularly interested in text forms that
encourage active perception and can be adapted to work with audiovisual
systems. She lives and works in Freiburg, Germany.
Ephraim Wegner
Ephraim Wegner (*1980) studied audiovisual media at KHM in Cologne and
is currently teaching generative art and audiovisual media at the university in Offenburg. As an artist he uses various computer languages (like
Csound, Pure Data and Processing) to combine different forms of digital
audio synthesis and generative art, “steering” towards multidisciplinary
approaches and concepts. His performance practice ranges from improvisation (preferably using live input from instrumentalists) and notated works
up to algorithmic compositions. To date he has cooperated with numerous musicians, ensembles, festivals and institutions, among others ZKM (Karlsruhe), "ars acustica" (SWR2), "Acht Brücken Festival" (Cologne) and "Donaueschinger Musiktage". In 2015 he received a scholarship from the Kunststiftung Baden-Württemberg.
Terri Witek
Terri Witek is the author of Exit Island, The Shipwreck Dress (both Florida Book Award Medalists), Carnal World, Fools and Crows, Courting Couples (winner of the 2000 Center for Book Arts Contest) and Robert Lowell and LIFE STUDIES: Revising the Self, as well as a recent comic book/poetry chapzine, First Shot at Fort Sumter/Possum. Her poetry has appeared in Slate, The Hudson Review, The New Republic, The American Poetry Review, and other journals, and she is the recipient of fellowships from the MacDowell Colony, Hawthornden International Writers' Retreat, and the state of Florida. A native of northern Ohio, she teaches English at Stetson University, where she holds the Sullivan Chair in Creative Writing.
terriwitek.com
James Wyness
James Wyness is a composer with an interest in abstracting sounds from
their generators, examining the morphological features of complex sounds
in accordance with their inherent materiality, observing and structuring
the birth and evolution of these forms as they unfold, and in directing this
complexity into music. He has an MA Honours degree in French Studies
and a PhD in electroacoustic composition from the University of Aberdeen.
Current research interests include semiology, anthropology, the origins of
language and the history of language development, sociolinguistics, social
theory, cultural theory and the political economy of the arts, morphology,
determinism/randomness, set theory, topology, dynamic systems and evolutionary biology.
Martin Zeilinger
Martin Zeilinger holds a PhD in Comparative Literature from the University of Toronto and is a researcher, curator, and practitioner in the areas of media
studies and media theory, focusing on contemporary art, games, and intellectual property issues. He is also co-director of the Toronto-based Vector
Game Art & New Media Festival. Having performed with the Vienna-based
improv rock band Thalija for many years, recently he has begun a solo practice live coding in Sonic Pi.
marjz.net/
Andreas Zingerle
Andreas Zingerle is a media artist from Austria. He is a PhD candidate at
the Timebased and Interactive Media Department in Linz (Austria). He is
researching vigilante online communities of scammers and anti-scam activists and implements their mechanics in interactive narratives and creative media workshops. In the last years he worked on several installations
exploring a creative misuse of technology and alternative ways of Human
Computer Interaction. Since 2004 he takes part in international conferences and exhibitions, among others Ars Electronica Campus, Siggraph,
Japan Media Arts Festival, File, WRO Biennale, Steirischer Herbst, xCoAx.
www.andreaszingerle.com
www.kairus.org
This project is partially funded by FEDER through the
Operational Competitiveness Program – COMPETE – and
by national funds through the Foundation for Science and
Technology – FCT – in the scope of projects PEst-C/EAT/
UI4057/2011 (FCOMP-Ol-0124-FEDER-D22700) and PEst-OE/
EAT/UI0622/2014.