digital da vinci: computers in music




Contents

1 A Tale of Four Moguls: Interviews with Quincy Jones, Karlheinz Brandenburg, Tom Silverman, and Jay L. Cooper ..... 1

Newton Lee

1.1 Interview with Quincy Jones ..... 1
1.2 Interview with Karlheinz Brandenburg ..... 4
1.3 Interview with Tom Silverman ..... 6
1.4 Interview with Jay L. Cooper ..... 8
References ..... 11

2 Getting on the Billboard Charts: Music Production as Agile Software Development ..... 13

Newton Lee

2.1 Music Appreciation and Songwriting ..... 13
2.2 Relationship between Music and Computer Programming ..... 15
2.3 Computers and Music in the Digital Age ..... 16
2.4 Music Production (Software Development) Life Cycle ..... 17
References ..... 26

3 Producing and Its Effect on Vocal Recordings ..... 29

M. Nyssim Lefford

3.1 Process and Procedures ..... 31
3.2 Producing ..... 42
3.3 Instinct and Non-Linguistic Communication ..... 57
3.4 Features and Their Functions ..... 67
3.5 Conclusions ..... 72
References ..... 72


4 Mediated Interactions and Musical Expression—A Survey ..... 79

Dennis Reidsma, Mustafa Radha and Anton Nijholt

4.1 Introduction ..... 79
4.2 Users and Musical Instruments: Conceptual Framework ..... 80
4.3 Interaction Modes for Musical Expression: The Instrument as Tool ..... 81
4.4 Mediated Communication in Musical Expression: The Instrument as Medium ..... 84
4.5 Co-Creative Agents: The Instrument as Agent ..... 90
4.6 Discussion ..... 93
References ..... 95

5 Improvising with Digital Auto-Scaffolding: How Mimi Changes and Enhances the Creative Process ..... 99

Isaac Schankler, Elaine Chew and Alexandre R. J. François

5.1 Introduction ..... 99
5.2 The Mimi System ..... 100
5.3 Performance Strategies ..... 106
5.4 Formal Structures ..... 111
5.5 Mimi4x and User-Guided Improvisations ..... 116
5.6 Conclusion ..... 123
References ..... 124

6 Delegating Creativity: Use of Musical Algorithms in Machine Listening and Composition ..... 127

Shlomo Dubnov and Greg Surges

6.1 Introduction ..... 127
6.2 History of Mathematical Theory of Music and Compositional Algorithms ..... 128
6.3 Generative Music in Popular Music Practices ..... 129
6.4 Computer Modeling of Music ..... 132
6.5 Machine Improvisation ..... 134
6.6 Modeling of Musical Style ..... 135
6.7 Musical Semantics ..... 137
6.8 Computational Aesthetics ..... 138
6.9 Audio Oracle ..... 141
6.10 Music Experience of Expectation and Explanation ..... 145
6.11 Analyzing “Visions Fugitives” ..... 148
6.12 Conclusion: The Challenge of Composing and the Pleasure of Listening ..... 156
References ..... 157


7 Machine Listening of Music ..... 159

Juan Pablo Bello

7.1 Introduction ..... 159
7.2 Timbre ..... 160
7.3 Rhythm ..... 166
7.4 Tonality ..... 174
7.5 Conclusions ..... 179
References ..... 181

8 Making Things Growl, Purr and Sing ..... 185

Stephen Barrass and Tim Barrass

8.1 Introduction ..... 185
8.2 Affective Sounds ..... 186
8.3 Aesthetics of Sonic Interfaces ..... 188
8.4 The Sonification of Things ..... 189
8.5 Mozzi: An Embeddable Sonic Interface to Smart Things ..... 194
References ..... 202

9 EEG-Based Brain-Computer Interface for Emotional Involvement in Games Through Music ..... 205

Raffaella Folgieri, Mattia G. Bergomi and Simone Castellani

9.1 Introduction ..... 205
9.2 Implicit and Explicit BCI in Games: A Short Review ..... 206
9.3 Enhancing Gamers’ Emotional Experience Through Music ..... 207
9.4 Conclusions ..... 233
References ..... 234

10 Computer and Music Pedagogy ..... 237

Kai Ton Chau

10.1 The Quest ..... 237
10.2 Technology and the Music Classroom ..... 238
10.3 Approach in Music Learning ..... 241
10.4 Fuzzy Logic and Artificial Intelligence ..... 251
10.5 Summary ..... 252
References ..... 253

Index ..... 255


Chapter 1
A Tale of Four Moguls: Interviews with Quincy Jones, Karlheinz Brandenburg, Tom Silverman, and Jay L. Cooper

Newton Lee

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_1, © Springer Science+Business Media New York 2014

N. Lee ()
Newton Lee Laboratories, LLC, Tujunga, CA, USA
e-mail: [email protected]

School of Media, Culture & Design, Woodbury University, Burbank, CA, USA
e-mail: [email protected]

MP3 and peer-to-peer file sharing technology single-handedly disrupted the age-old music business. iTunes and YouTube have displaced record stores and MTV. If we take the cue from Netflix, which has successfully produced original content, it will not be long before Apple and Google sign new artists and rival the record labels.

1.1 Interview with Quincy Jones

Quincy Jones, who composed more than 40 major motion picture and television scores, has earned international acclaim as producer of the best-selling album of all time—Michael Jackson’s Thriller—which has sold more than 110 million copies worldwide. Jones was also the producer and conductor of the charity song “We Are the World.”

The all-time most nominated Grammy artist with a total of 79 nominations and 27 wins, Jones has also received an Emmy Award, 7 Oscar nominations, the Academy of Motion Picture Arts and Sciences’ Jean Hersholt Humanitarian Award, the Ahmet Ertegun Award for Lifetime Achievement, and the Grammy Legend Award. He was inducted into the Rock & Roll Hall of Fame in 2013.

On November 25, 2003, I had the honor of interviewing Quincy Jones at his then residence at 1100 Bel Air Place. With the assistance of my colleagues Eric Huff, Brett Hardin, and Laura Knight, we took our video cameras and lighting equipment to the three-story wood-and-brick house, checked in with the security guards, and were met with a big, warm welcome from Quincy Jones (see Fig. 1.1).

The following is a transcript of the video interview with Quincy Jones (Lee 2004a):


Chapter 2
Getting on the Billboard Charts: Music Production as Agile Software Development

Newton Lee

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_2, © Springer Science+Business Media New York 2014

N. Lee ()
Newton Lee Laboratories, LLC, Tujunga, CA, USA
e-mail: [email protected]

School of Media, Culture & Design, Woodbury University, Burbank, CA, USA
e-mail: [email protected]

Computers and music are converging in a new era of digital Renaissance as more and more musicians such as will.i.am are learning how to code while an increasing number of software programmers are learning how to play music.

2.1 Music Appreciation and Songwriting

I dabbled with music composition before I learned computer programming. When I was in high school, one of my best friends, Kai Ton Chau, and I would go to the Hong Kong Arts Centre on the weekends and listen to hours of classical music. My appreciation of music grew from passive listening to active songwriting. For the high school yearbook, Chau and I decided to write a song together. Inspired by Rodgers and Hammerstein, he composed the music and I wrote the lyrics. The resulting sheet music was published in the yearbook.

Although I majored in electrical engineering and computer science during my college years, my songwriting hobby did not dwindle over time. On the contrary, as soon as I landed my first full-time job at AT&T Bell Laboratories, I bought a professional Roland synthesizer and hooked it up to a Macintosh SE computer loaded with all the best music composition software at the time. I would write melodies, and my vocal teacher Josephine Clayton would arrange the music.

As the founding president of Bell Labs’ Star Trek in the Twentieth Century Club, I produced the first-ever “Intergalactic Music Festival” to showcase international songs and cultural dances performed by fellow AT&T employees. Indeed, music abounds in the Star Trek universe: Leonard Nimoy wrote and performed the song “Maiden Wine” in the original Star Trek episode “Plato’s Stepchildren,” and he


Chapter 3
Producing and Its Effect on Vocal Recordings

M. Nyssim Lefford

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_3, © Springer Science+Business Media New York 2014

M. N. Lefford ()
Luleå University of Technology, Luleå, Sweden
e-mail: [email protected]

This chapter investigates one of the more enigmatic aspects of record production: producing. In particular, it considers how producers influence the sonic attributes of vocal recordings and how technology intersects with the act of producing. The producer’s actions, though they remain poorly understood, have long been deemed a necessary part of music recording. Producers advise and sometimes outright direct the creation of “the sound” of a recording, meaning they shape not one particular performance or aspect of it but every aspect, so that all the different sounds fit together. Producers work on individual parts, but with the whole always in view. Ideally, this attention leads to a coherent listening experience for the audience. Some parts are exceedingly ephemeral, such as the attributes of the recorded voice. Yet the voice, with its exceptional malleability, provides essential glue for piecing together coherent wholes.

The nature and function of these vocal attributes, and more generally the attributes that constitute “the sound” of a recording, remain topics for debate and research. The importance of the producer’s contribution to their creation, however, is uncontested, and extensively documented in music history books and the sales charts of the recording industry’s trade press. Producers were present in the earliest recording sessions (e.g. Fred Gaisburg and Walter Legge), and there is no hint of them disappearing soon. To the contrary, the role of the producer in recording is evolving and expanding. According to Frith, rock criticism reveals the indicators of change: “If producers were always implicitly involved in the rock story, their role is now being made explicit” (Frith 2012, p. 221). Increasingly, producers are credited as the central artistic force in productions, and increasingly, musicians self-identify as producers and/or self-produce. For these reasons alone, producing deserves greater attention. In addition, technology is pressing the issue. New technologies enable but also encroach on existing production methods and techniques.

The producer’s expertise is not only ill-defined, but it also varies greatly among practitioners: some producers are more musically literate, others more technical, some very business-minded, others almost shamanistic in their approach to leading


Chapter 4
Mediated Interactions and Musical Expression—A Survey

Dennis Reidsma, Mustafa Radha and Anton Nijholt

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_4, © Springer Science+Business Media New York 2014

D. Reidsma () · M. Radha · A. Nijholt
Human Media Interaction, University of Twente, PO Box 217, 7500 AE, Enschede, The Netherlands
e-mail: [email protected]

M. Radha
e-mail: [email protected]

A. Nijholt
e-mail: [email protected]

4.1 Introduction

The dawn of the information and electronics age has had a significant impact on music. Digital music creation has become a popular alternative to playing classical instruments, and in its various forms has taken its place as a full-fledged class of instrument in its own right. Research into technological or digital instruments for musical expression is a fascinating field that, among other things, tries to support musicians and to improve the art of musical expression. Such instruments broaden the available forms of musical expression and provide new modes for expression, described by some as a reinvention of the musician’s proprioceptive perception (Benthien 2002; Kerckhove 1993). They can make musical expression and musical collaboration more accessible to non-musicians and/or serve as educational tools. Technology can also eliminate the boundaries of space and time in musical collaboration or performance, or enhance it, by providing new channels of interaction between performers or between performer and audience. Furthermore, technology in itself can be a collaborating partner in the form of a creative agent, co-authoring, helping or teaching its user. Finally, Beilharz brings forward the human desire for post-humanism and cyborgism in musical expression as a goal in itself to explore mediating technologies (Beilharz 2011).

In this chapter we will survey music technology through various lenses, exploring the qualities of technological instruments as tools, media and agents and investigating the micro-coordination processes that occur in musical collaboration, with the long range goal of creating better technological artifacts for music expression.


Chapter 5
Improvising with Digital Auto-Scaffolding: How Mimi Changes and Enhances the Creative Process

Isaac Schankler, Elaine Chew and Alexandre R. J. François

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_5, © Springer Science+Business Media New York 2014

I. Schankler ()
Process Pool Music, Los Angeles, CA, USA
e-mail: [email protected]

E. Chew
Queen Mary University of London, London, UK
e-mail: [email protected]

A. R. J. François
Interactions Intelligence, London, UK
e-mail: [email protected]

5.1 Introduction

This chapter examines the creative process when a human improviser or operator works in tandem with a machine improviser to create music. The discussions are situated in the context of musicians’ interactions with François’ Multimodal Interaction for Musical Improvisation, also known as Mimi (François et al. 2007; François 2009).

We consider the questions: What happens when machine intelligence assists, influences, and constrains the creative process? In the typical improvised performance, the materials used to create that performance are thought of as tools in service of the

This chapter incorporates, in part and in modified form, material that has previously appeared in “Mimi4x: An Interactive Audio-visual Installation for High-level Structural Improvisation” (Alexandre R. J. François, Isaac Schankler and Elaine Chew, International Journal of Arts and Technology, vol. 6, no. 2, 2013), “Performer Centered Visual Feedback for Human-Machine Improvisation” (Alexandre R. J. François, Elaine Chew and Dennis Thurmond, ACM Computers in Entertainment, vol. 9, no. 3, November 2011, 13 pages), “Preparing for the Unpredictable: Identifying Successful Performance Strategies in Human-Machine Improvisation” (Isaac Schankler, Alexandre R. J. François and Elaine Chew, Proceedings of the International Symposium on Performance Science, Toronto, Canada, 24–27 August 2011), and “Emergent Formal Structures of Factor Oracle-Driven Musical Improvisations” (Isaac Schankler, Jordan L.B. Smith, Alexandre R. J. François and Elaine Chew, Proceedings of the International Conference on Mathematics and Computation in Music, Paris, France, 15–17 June 2011).


Chapter 6
Delegating Creativity: Use of Musical Algorithms in Machine Listening and Composition

Shlomo Dubnov and Greg Surges

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_6, © Springer Science+Business Media New York 2014

S. Dubnov () · G. Surges
Music Department, University of California, San Diego, San Diego, USA
e-mail: [email protected]

G. Surges
e-mail: [email protected]

6.1 Introduction

In recent years, the ability of computers to characterize music by learning rules directly from musical data has led to important changes in the patterns of music marketing and consumption, and has more recently also added semantic analysis to the palette of tools available to digital music creators. Tools for automatic beat, tempo, and tonality estimation provide matching and alignment of recordings during the mixing process. Powerful signal processing algorithms can change the duration and pitch of recorded sounds as if they were synthesized notes. Generative mechanisms allow randomization of clip triggers to add more variation and naturalness to what would otherwise be a repetitive, fixed loop. Moreover, ideas from contemporary academic music composition, such as Markov chains, granular synthesis and other probabilistic and algorithmic models, are slowly finding their way in, crossing over from experimental academic practices to mainstream popular and commercial applications. Procedurally generated computer game scores such as the one composed by Brian Eno for Maxis’ 2008 “Spore” and albums such as Björk’s 2011 “Biophilia”—in which a traditional album was paired with a family of generative musical iPad “apps”—are some recent examples of this hybridization (Electronic Arts 2013).
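As a minimal illustration of the kind of probabilistic model mentioned above, a first-order Markov chain can be trained on an existing note sequence and then random-walked to generate a stylistically similar variation. The toy melody, function names, and dead-end handling below are illustrative assumptions, not anything described in the chapter itself:

```python
import random

def train_markov(notes):
    """Build a first-order transition table: note -> list of observed successors."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=None):
    """Random-walk the transition table to produce a new note sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:          # dead end: restart from the initial note
            successors = [start]
        out.append(rng.choice(successors))
    return out

# Train on a toy melody (MIDI note numbers) and generate an 8-note variation.
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
table = train_markov(melody)
print(generate(table, 60, 8, seed=1))
```

Because successors are drawn with the same frequencies they occur in the training melody, the output preserves local note-to-note tendencies while still varying globally, which is exactly the trade-off that makes such models attractive for adding "naturalness" to loops.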

These developments allow composers to delegate larger and larger aspects of music creation to the machines. Accommodating this trend requires developing novel approaches to music composition that allow specification of desired musical outcomes on a higher meta-creative level. With the introduction of such sophisticated software models into the process of music production, we face new challenges to our traditional understanding of music. In music research, new evidence establishes


Chapter 7
Machine Listening of Music

Juan Pablo Bello

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_7, © Springer Science+Business Media New York 2014

J. P. Bello ()
Music and Audio Research Laboratory (MARL), New York University, New York, USA
e-mail: [email protected]

7.1 Introduction

The analysis and recognition of sounds in complex auditory scenes is a fundamental step towards context-awareness in machines, and thus an enabling technology for applications across multiple domains including robotics, human-computer interaction, surveillance and bioacoustics. In the realm of music, endowing computers with listening and analytical skills can aid the organization and study of large music collections, the creation of music recommendation services and personalized radio streams, the automation of tasks in the recording studio or the development of interactive music systems for performance and composition.

In this chapter, we survey common techniques for the automatic recognition of timbral, rhythmic and tonal information from recorded music, and for characterizing the similarities that exist between musical pieces. We explore the assumptions behind these methods and their inherent limitations, and conclude by discussing how current trends in machine learning and signal processing research can shape future developments in the field of machine listening.

7.1.1 Standard Approach

Most machine listening approaches follow a standard two-tier architecture for the analysis of audio signals. The first stage is devoted to extracting distinctive attributes, or features, of the audio signal to highlight the music information of importance to the analysis. The second stage utilizes these features either to categorize signals into one of a predefined set of classes, or to measure their (dis)similarity to others.

In the literature, the first stage typically utilizes a mix of signal processing techniques with the heuristics necessary to extract domain-specific information.
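The two-tier architecture described above can be sketched end to end. The sketch below is a deliberately simplified stand-in for the real feature extractors surveyed in the chapter: stage one computes a single timbral feature (the mean spectral centroid, in normalized frequency), and stage two categorizes a signal by distance to labeled reference values. The signals, class labels, and function names are illustrative assumptions:

```python
import numpy as np

def spectral_centroid(signal, frame_size=1024):
    """Stage 1 (feature extraction): mean spectral centroid over fixed-size frames."""
    centroids = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size]
        mag = np.abs(np.fft.rfft(frame))          # magnitude spectrum of the frame
        freqs = np.fft.rfftfreq(frame_size)       # normalized frequencies, 0..0.5
        if mag.sum() > 0:
            centroids.append((freqs * mag).sum() / mag.sum())
    return float(np.mean(centroids))

def classify(feature, references):
    """Stage 2 (categorization): pick the class whose reference feature is nearest."""
    return min(references, key=lambda label: abs(references[label] - feature))

# Toy example: distinguish a low sine from a high sine by spectral centroid.
t = np.arange(4096)
low = np.sin(2 * np.pi * 0.01 * t)
high = np.sin(2 * np.pi * 0.2 * t)
refs = {"low": spectral_centroid(low), "high": spectral_centroid(high)}
print(classify(spectral_centroid(np.sin(2 * np.pi * 0.19 * t)), refs))
```

Swapping in richer features (MFCCs, chroma, onset patterns) or a trained classifier changes only the contents of each stage, not the two-tier shape itself.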


Chapter 8
Making Things Growl, Purr and Sing

Stephen Barrass and Tim Barrass

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_8, © Springer Science+Business Media New York 2014

S. Barrass ()
Faculty of Arts and Design, University of Canberra, Canberra, Australia
e-mail: [email protected]

T. Barrass
Independent Artist and Inventor, Melbourne, Australia
e-mail: [email protected]

“All our knowledge has its origin in the senses.”

—Leonardo da Vinci

8.1 Introduction

The seminal book Making Things Talk provides instructions for projects that connect physical objects to the internet, such as a pet’s bed that sends text messages to a Twitter tag (Igoe 2007). Smart Things, such as LG’s recently released Wifi Washing Machine, can be remotely monitored and controlled with a browser on a Smart Phone. The Air-Quality Egg (http://airqualityegg.com) is connected to the Internet of Things, where air-quality data can be shared and aggregated across neighborhoods, countries or the world. However, as Smart Things become more pervasive, a problem arises: the interface is separate from the thing itself. In his book The Design of Everyday Things, Donald Norman introduces the concept of the Gulf of Evaluation, which describes how well an artifact supports the discovery and interpretation of its internal state (Norman 1988). A Smart Kettle that tweets the temperature of the water as it boils has a wider Gulf of Evaluation than an ordinary kettle that sings as it boils, because of the extra levels of indirection required to access and read the data on a mobile device compared to hearing the sound of the whistle.

In this chapter we research and develop interactive sonic interfaces designed to close the Gulf of Evaluation in interfaces to Smart Things. The first section describes the design of sounds to provide an emotional connection with an interactive couch designed in response to the Experimenta House of Tomorrow exhibition (Hughes et al. 2003) that asked artists to consider whether “the key to better living will be delivered through new technologies in the home”. The design of the


Chapter 9
EEG-Based Brain-Computer Interface for Emotional Involvement in Games Through Music

Raffaella Folgieri, Mattia G. Bergomi and Simone Castellani

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_9, © Springer Science+Business Media New York 2014

R. Folgieri ()
DEMM, Dipartimento di Economia, Management e Metodi quantitativi, Università degli Studi di Milano, Milano, Italy
e-mail: [email protected]

M. G. Bergomi
Dipartimento di Informatica, Università degli Studi di Milano, Milano, Italy

Ircam, Université Pierre et Marie Curie, Paris, France

S. Castellani
CdL Informatica per la Comunicazione, Università degli Studi di Milano, Milano, Italy

9.1 Introduction

Several studies promote the use of Brain-Computer Interface (BCI) devices for gaming. From a commercial point of view, games represent a very lucrative sector, and gamers are often early adopters of new technologies such as BCIs. Despite the many attempts to create BCI-based game interfaces, few BCI applications for entertainment are really effective. The difficulty lies not on the computer side, in signal interpretation, but on the users’ side, in focusing on the imagined movements that control a game’s characters.

Some BCIs, such as the Emotiv Epoc (Emotiv Systems Inc.), make it easy for the myographic interface to generate game commands, but are hardly able to translate pure cerebral signals correctly into actions or movements. The difficulties are mainly due to the fact that BCIs are currently less accurate than other game interfaces and require several training sessions before use. Music entertainment applications, on the contrary, seem to be more effective and require only short training on BCI devices.

In section two, we present a short review of the implicit and explicit use of BCI in games and of the main commercial BCI models. Section three focuses on our approach to enhancing gamers’ emotional experience through music: we present the results of preliminary experiments performed to evaluate our approach to detecting users’ mental states. We also present a prototype of a music entertainment tool developed to allow users to consciously create music with their brainwaves. Lastly, we briefly present the state of the art of our research. In section four, we present our


Chapter 10
Computer and Music Pedagogy

Kai Ton Chau

N. Lee (ed.), Digital Da Vinci, DOI 10.1007/978-1-4939-0536-2_10, © Springer Science+Business Media New York 2014

K. T. Chau ()
Kuyper College, Michigan, USA
e-mail: [email protected]

10.1 The Quest

A number of years ago, I was fascinated by a discussion among several music educators who were also computer enthusiasts. The discussion was about the use of computers to enhance learning in basic musicianship, particularly pitch and rhythm matching—some of the fundamental musical abilities for singers.

When a student is asked to match a tone (pitch) generated by a computer, how does the computer determine if the response is accurate? In the same way, when a student is asked to repeat a rhythmic pattern generated by a computer, how would the computer determine the accuracy of the response?

To the unsuspecting, the processes behind the above events might seem quite straightforward. In the former case, the computer is programmed to generate a tone and seeks a response from the user. When the student responds by repeating the pitch, the computer captures the sound from an input source (typically a microphone) and then analyzes it (in particular, the frequency of the recorded tone). If the numerical value of the captured frequency matches the preset value of the underlying question, the answer is correct, and the computer provides positive feedback to the student; if not, the feedback is negative. In the latter case, the computer generates a series of rhythmic patterns at a defined speed (tempo) chosen by the programmer. The response from the student is again captured by a microphone and then analyzed by the computer. If the length of each note is an exact match with the preset patterns, the answer is correct; if not, the response is not an accurate one.

Represented logically, if a represents the numeric value of the frequency or rhythmic pattern generated by the computer, and b represents the numeric value of the frequency or rhythmic pattern captured by the computer from the response, then the response is judged correct if and only if a = b.
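The exact-match comparison set up above can be sketched in code. Since a live voice never reproduces a target frequency exactly, the sketch hedges the strict equality test with a tolerance expressed in cents; the function name and the default threshold of 50 cents (half a semitone) are illustrative assumptions, not values from the discussion:

```python
import math

def is_pitch_match(a, b, tolerance_cents=50):
    """Compare target frequency a (Hz) with the student's response b (Hz).

    A strict test (a == b) would almost never pass for a sung response,
    so the match is accepted when the interval between the two frequencies
    is within a tolerance in cents (100 cents = one equal-tempered semitone).
    """
    cents = abs(1200 * math.log2(b / a))
    return cents <= tolerance_cents

print(is_pitch_match(440.0, 440.0))   # exact match
print(is_pitch_match(440.0, 452.0))   # about 47 cents sharp: accepted
print(is_pitch_match(440.0, 466.16))  # a full semitone off: rejected
```

A rhythm matcher would take the same shape, comparing each captured note onset and duration against the preset pattern within a timing tolerance instead of a pitch tolerance.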
