Why Live? Performance in the Age of Digital Reproduction: On Missing the Live of the Everyday Ambient
TRANSCRIPT
On Missing the Live of the Everyday Ambient
Author: Natalie Bell
Source: Leonardo Music Journal, Vol. 18, Why Live? Performance in the Age of Digital Reproduction (2008), pp. 43-44
Published by: The MIT Press
Stable URL: http://www.jstor.org/stable/25578119
Accessed: 14/06/2014 08:39
ments audibly and inaudibly vibrated the strings and soundboard, thereby transducing the signals from my rig: two wide-band radio receivers and a MIDI synthesizer. They were, in turn, processed and matrix-routed through a Max/MSP patch to the transducer's amplifiers.
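The matrix-routing stage can be pictured as a gain matrix in which each output channel receives a weighted mix of the input sources. The sketch below is a hypothetical Python illustration only; the actual routing lives in the Max/MSP patch, and the gain values here are invented.

```python
def matrix_route(inputs, gains):
    """Mix source samples into output channels.

    inputs: per-source sample values (here: two radio receivers
            and a MIDI synthesizer).
    gains:  gains[out][src] gives the weight of each source in
            each output (a transducer amplifier).
    """
    return [sum(g * s for g, s in zip(row, inputs)) for row in gains]

# Three sources routed to two transducer amplifiers
# (illustrative gain values, not taken from the actual patch):
gains = [[1.0, 0.0, 0.5],
         [0.0, 1.0, 0.5]]
print(matrix_route([1.0, 2.0, 4.0], gains))
```

Changing the entries of the matrix over time is what makes the routing "matrixed" rather than fixed: any source can be faded into any transducer independently.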
Improvising on such an instrument is less like conversation and more like being in the same head. Each performer's actions inside the piano affected those of the other, reinforcing, negating or bending each conversational element such that the resulting utterance was not a completely individual event but rather a composite expression with a singular, hybrid voice. At each moment, the acoustic parameters of the instrument changed, and actions did not always have the predicted outcome. The nature of the preparations and the resultant modes of interaction engendered an organic, living piano that was both a challenge and a pleasure to play.

In order to maintain the life of the instrument and the performances, the resulting eight recordings will be released as superimposable tracks, inviting the listener to freely choose the number of recordings playing at any one time and to tweak other parameters such as volume and individual start times. Playing two of the eight recordings simultaneously results in a quadraphonic experience with 28 possible combinations. If the listener chooses more, the overall complexity of the experience increases, but the number of distinct combinations eventually diminishes. The available permutations may not be as numerous as in live performance, but they are certainly more numerous than those of a single track, sustaining the vivacity of the performance to satisfy a healthy wanderlust.
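The figure of 28 is the binomial coefficient C(8, 2): the number of ways to choose two of the eight tracks. Counting the other choices shows that the number of combinations peaks at four simultaneous tracks and declines thereafter. A quick illustrative check:

```python
from math import comb

# Two of eight superimposable tracks playing at once: C(8, 2) pairings.
pairings = comb(8, 2)
print(pairings)  # 28

# The count for each possible number of simultaneous tracks rises to a
# peak at four tracks, then falls back toward a single combination:
counts = [comb(8, k) for k in range(1, 9)]
print(counts)  # [8, 28, 56, 70, 56, 28, 8, 1]
```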
Brett Ian Balogh is an artist and instructor currently teaching at the School of the Art Institute of Chicago and the Illinois Institute of Technology. Balogh is also a member of the improvisational sound collective Clairaudient.
On Missing the Live of the Everyday Ambient

Natalie Bell. E-mail: <[email protected]>. Web site: <www.nataliebell.org>.

Sound examples related to this article are available at <www.nataliebell.org/sounds>.
When John Cage introduced ambient sounds as musical material, music was no longer a creation of the performer but one of the listener. Following his precedent, I understand performance as prescribing a way of listening, and of listening to ambient sounds in particular. My performance is minimal and mute. With a tap on the shoulder, I silently interrupt the subway passengers who are listening to their earbuds and pass them a small flier that encourages them to download and listen to my podcast, Sounds Like, while riding the subway. My compositions for the podcast consist of my own field recordings of the subway, layered with old ethnographic field recordings and various 20th-century avant-garde works as well as contemporary popular music. The predominance of subway recordings, however, is primary to my goal: that listeners learn to open their perception to the aesthetic qualities of these ambient sounds and appreciate the experience of collective listening practices in public spaces.

In contemporary culture, not only the concert hall is obsolete. Still more impertinent are the casual performances of the urban everyday, namely,
[Fig. 1. Photo of piano preparations used in Balogh's recent project.]
subway musicians and the ambient soundscapes that subsume them. The use of a podcast as a medium, though somewhat elementary, is a precise element of my endeavor. In reincorporating the ambient sounds that would otherwise be live into a listener-controlled private performance in the collective public space of the sounds' origin, I invite the listener to investigate her capacity to aestheticize the familiar and locally present sounds and to reconsider how listening practices with respect to one's everyday surroundings are cultivated or neglected. As a prelude to other works involving sound environments and technology, my podcast is a simple exercise in engaging the contemporary listening practices that are privately motivated and often focused on excluding ambient sounds.
Natalie Bell's recent pieces can be heard at <www.nataliebell.org/sounds>.
Collaborative Creation, Live Performance and Flock

Jason Freeman, Music Department, Georgia Institute of Technology, College of Architecture, 840 McMillan Street, Atlanta, GA 30332-0456, U.S.A. E-mail: <[email protected]>. Web site: <www.jasonfreeman.net>.

Photos, sound and video examples related to this article are available at <www.jasonfreeman.net/flock>.
In Flock (2007), my recent full-evening work for saxophone quartet, dancers, electronic sound, video and audience participation, I attempt to reconcile the growing cultural shift toward collaborative models of content creation with the one-to-many model of music creation and dissemination that has traditionally dominated live performance. We attend live concerts, in part, because we want to participate in a unique, spontaneous musical experience and to share that experience with others. Many such concerts, however, seem more concerned with delivering a consistent product than with creating music in the moment. For some artists, the biggest risk is that their lip-synching will be discovered.
Flock uses novel computer vision and real-time notation systems to delay content creation until the moment of each performance, so that the music can reflect the creative activities of each show's performers and audience members. Music notation, electronic sound and video animation are all generated in real time based on the locations of musicians, dancers and audience members as they stand up, move around and interact with each other in accordance with simple textual and visual instructions.
Computer vision software, developed by my collaborator Mark Godfrey, analyzes images from an overhead video camera to calculate the location data. After pre-processing and lens-distortion correction, the software calculates an (x, y) point for each participant, using blob detection for the audience members and dancers and a more sophisticated particle filter [1] to uniquely identify each saxophonist. Each participant wears a lighted hat to facilitate efficient and reliable tracking.
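Godfrey's tracking software is not reproduced here. As a rough illustration of the blob-detection step only, the following pure-Python sketch finds the centroid of each bright region in a thresholded camera frame; the binary-mask input format and 4-connectivity are assumptions for the example, not details of the actual implementation.

```python
from collections import deque

def blob_centroids(mask):
    """Return (x, y) centroids of 4-connected bright blobs.

    mask: list of rows of 0/1 values, e.g. a thresholded overhead-camera
    frame in which each lighted hat appears as a bright blob.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one blob, collecting its pixel coordinates.
                queue, pixels = deque([(x, y)]), []
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                # The blob's (x, y) point is the mean of its pixels.
                centroids.append((sum(p[0] for p in pixels) / len(pixels),
                                  sum(p[1] for p in pixels) / len(pixels)))
    return centroids
```

Blob detection of this kind yields anonymous points, which is why the saxophonists, who must be told apart individually, require the particle filter instead.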
My own custom software then generates music notation for each saxophonist based on the location data; that notation is sent wirelessly to a PDA mounted on each player's instrument. The notation (Fig. 2) sometimes displays music on conventional staves but often utilizes graphical contours, along with pitch labels, dynamics and articulations, to guide the musicians' improvisation.
I employ a variety of algorithms to generate the notation. Sometimes, the coordinates of each point simply map to measure position (x) and pitch (y). At other times, each saxophonist serves as the center of a polar coordinate system, and each point within range is mapped to a pitch (radius) and measure position (angle). Often,
Fig. 2. Jason Freeman, four styles of real-time music notation generated by Flock's software. (Drawing © J. Freeman) The musician plays the darker notes; the lighter items show music played by the other saxophonists. The vertical bar shows measure position and maintains time synchronization among the players.
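The two mappings just described, Cartesian (x to measure position, y to pitch) and polar (radius to pitch, angle to measure position), can be sketched as follows. The bar length, pitch span and function names are hypothetical values chosen for the example, not taken from Flock's software.

```python
import math

BAR_BEATS = 4            # assumed measure length in beats
PITCH_RANGE = (48, 84)   # assumed MIDI pitch span (illustrative)

def cartesian_map(x, y, width, height):
    """Map a participant's (x, y) point to (measure position, pitch):
    x spans the bar, y spans the pitch range."""
    beat = (x / width) * BAR_BEATS
    lo, hi = PITCH_RANGE
    pitch = round(lo + (y / height) * (hi - lo))
    return beat, pitch

def polar_map(px, py, sx, sy, max_radius):
    """Polar variant centered on a saxophonist at (sx, sy):
    radius maps to pitch, angle maps to measure position."""
    dx, dy = px - sx, py - sy
    r = math.hypot(dx, dy)
    if r > max_radius:
        return None  # point out of range: no note generated
    theta = math.atan2(dy, dx) % (2 * math.pi)
    beat = (theta / (2 * math.pi)) * BAR_BEATS
    lo, hi = PITCH_RANGE
    pitch = round(lo + (r / max_radius) * (hi - lo))
    return beat, pitch
```

Because the points move continuously during the show, either mapping turns the crowd's motion directly into an evolving score.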