
THE ALGOTAR PROJECT: AN ALGORITHMIC PERFORMANCE TOOL FOR

HEXAPHONIC GUITAR

Author

Brian Tuohy 0743518

Supervisor

Giuseppe Torre

BSc. Music, Media and Performance Technology
Dept. of Computer Science & Information Systems

University of Limerick, Ireland

Submitted 18th April 2011


Acknowledgements

The author would like to thank:

Giuseppe Torre, my supervisor, for guiding me through the project, and pushing me through lulls in

having any idea what’s going on.

The lecturers and staff of the CSIS department, who have, over the past four years, exposed me to a

wider world of art and technology than I had imagined to be possible. Because of you, I have

developed an enthusiasm for such a broad range of topics that I am now at a loss for what I want to

be when I grow up.

My MMPT colleagues who have gone through the course with me, and those that set the bar ahead

of us. You have kept me sane over the past four years. The level of talent throughout the group is an

inspiration. I look forward to many more years of collaboration.

My family, who have supported me unconditionally throughout my time at University – even if

they had no idea what any of the abbreviations I kept mentioning stood for.

The angel that wakes me at three in the morning to catch a flight; sits through late nights of max

patching and ranting about the inherent follies of the system; and still manages to bring me coffee

and offer words of encouragement. You never cease to amaze me.


Table of Contents

Acknowledgements
Table of Contents
Table of Figures
Project Summary
1. Introduction and Objectives
   1.1 General Introduction
   1.2 Background and Motivation
   1.3 Proposed objectives
   1.4 Requirements to achieve these Objectives
2. Project Justification
   2.1 Why use iPod Touch?
   2.2 Why use Guitar?
   2.3 Why create generative music?
3. Research
   3.1 Introduction
   3.2 Previous Work
      3.2.1a Guitar Augmentation: iTouch 2.0
      3.2.1b Guitar Augmentation: Hexaphonic Guitar
      3.2.2a Audio Processing: Max/MSP Hexaphonic Toolbox
      3.2.2b Audio Processing: Spectral Processing in Max/MSP
      3.2.3a Algorithmic Composition: EMI
      3.2.3b Algorithmic Composition: Bloom
   3.3 Research Undertaken
      3.3.1 Guitar Augmentation
      3.3.2 Networking Software
      3.3.3 Algorithmic Composition
4. Networked Control – TouchOSC Development
   4.1 Functions of application
   4.2 Interface Design
   4.3 Communicating with Max/MSP
5. Hexaphonic Implementation
   5.1 Hexaphonic Basics and Requirements
   5.2 Inputting audio, and Issues
6. Input Analysis
   6.1 Basic Principles
   6.2 Note Tracking
   6.3 Technique Tracking
   6.4 Audio Sampling
7. Compositional Rules & Algorithms
   7.1 Documenting Style
   7.2 Comparison
   7.3 Generation
8. Performance Controls
9. Conclusions and Further Work
Index of Appendices
   Appendix 1 – Networking Applications Screenshots
   Appendix 2 – Bloom Screenshot
   Appendix 3 – Important Max/MSP Objects Used
   Appendix 4 – Important Max Subpatches created
   Appendix 5 – Hierarchy of Max Patches
   Appendix 6 – Necessary Max Objects
   Appendix 7 – System Screenshots
References
Bibliography

Table of Figures

Figure 1 – iPod Touch
Figure 2 – Brian Green's iTouch 2.0
Figure 3 – Nodal Screenshot
Figure 4 – TouchOSC interface (Page 1)
Figure 5 – [OSC-route] Object in Max/MSP
Figure 6 – Routing accelerometer data in Max/MSP
Figure 7 – Divided pickup
Figure 8 – 13 Pin Signal from divided pickup
Figure 9 – Breakout Box
Figure 10 – Max/MSP Note & Amplitude Display
Figure 11 – iPod touch mounted on guitar with Dunlop D65
Figure 12 – Max/MSP [FX] patch
Figure 13 – XY Pad on page three of iPod interface
Figure 14 – Page 2 of iPod interface


Project Summary

The Algotar Project attempts to create a link between the areas of guitar-based popular music and

computer-aided algorithmic composition. This is achieved by the use of a guitar-mounted iPod

touch as a wireless control interface for an algorithmic composition application. The iPod acts as a

mediator between the differing musical environments concerned. By integrating the device into the

guitarist’s performance, and taking advantage of techniques inherent in the existing playing style, it

achieves a non-intrusive extension to the guitar’s system.

The audio processing software concerned seeks to use the live signal from the guitar as the source

for algorithmic music – creating a compromise between the concepts of determinacy associated

with traditional Western music, and computer generated unpredictability related to certain areas of

computer music.


1. Introduction and Objectives

1.1 General Introduction

This paper discusses the main concepts and current progress in The Algotar Project. The project

deals with the development of an augmented guitar system, which uses an iPod touch [1] to wirelessly

control computer-based audio manipulation software. The main function of this software is to

sample the live audio signal from the guitar, and use this as a source to create new musical content,

based on a set of defined rules or algorithms.

1.2 Background and Motivation

Many musicians turn to technology as a means of injecting new inspiration into their performances

or compositions. For guitarists, this often means experimenting with new effects pedals, electronic

modifications, or extended playing techniques. For example, this could mean using a pitch shifter

pedal to create new patterns in the sound, or replacing the guitar pick with an EBow [2] to reconstruct

the instrument's Attack, Decay, Sustain and Release (ADSR) characteristics. Perry Cook attributed

the popularity of such creative measures to the presence of spare bandwidth in the musician’s

technique (Cook 2001). What this means is that the musician is so familiar with the technical

processes involved in playing their music, that they can afford to direct some of their attention

elsewhere.

The author’s reason for investigating such an area is largely due to its musical relevance on an

empirical level. Self-imposed glass ceilings often stall a musician's advancement to a higher

level of technical skill. Hitting this wall of intermediate ability, however, can foster a certain

comfort in established techniques. This comfort has, in turn, led to experimentation with

the use of additional interfaces, in order to manipulate the sound produced by an electric guitar.

Available resources have a large impact on the direction of a particular study. It is for this reason

that the iPod touch came to play an integral part in this guitar augmentation. Familiarity with the

interface and its capabilities provides a distinct advantage in the development of a system that could

be easily controlled while playing the guitar. The purpose of this system is to send messages over a

wireless network, in order to control Digital Signal Processing (DSP) methods on a laptop

[1] http://www.apple.com/ipodtouch/
[2] http://www.ebow.com


computer. This processing occurs in Max/MSP [3] – an object-oriented programming environment for

music and multimedia. The function of such processing is to provide extended musical possibilities

by creating new musical content, based on the sounds produced by the guitar. This new musical

content will be controlled by a combination of the musician’s interaction with the iPod and the

processes detected within the musical content itself.

Such an amalgamation of traditional playing techniques and computer-assisted compositional

methods attempts to draw the two fields closer together. A divide still seems to exist between the

music of popular culture and academia – this is understandable, given the different situations that

surround their creation and consumption. What this has created, however, is a separation between

the two musical personae of a typical student of music technology. These two alter egos are: that

which exists within the academic framework, and the musical profile that the student practices

socially. For the author, this is an attempt to link these styles, and find a practical use for such

complex technologies by adding them to the performing palette of an average musician.

1.3 Proposed objectives

As this project spans several topics, there are a number of objectives in mind for the final result.

Some of these goals were considered from the beginning of the project – other goals developed as

the research into the project revealed new opportunities. The following is a list of goals set out for

the project, following some investigation into the area:

• Create a link between a tangible acoustic instrument and a processing system, which

complements the musician's established techniques – The aim here is to connect the guitar to

the software environment. By allowing control of both the initial sound of the guitar and the

manipulative parameters of the computer from a single location, the guitarist’s focus can be

kept on performance, and not navigation of extensive control systems.

• Extend the performance capabilities of a guitarist by allowing gestural and sonic

parameters to contribute to the audio processing – The performance can be further

benefited by minimising the amount of attention that the controls need. By incorporating

controls into inherent features of the performance, such as the movement of the guitar, or the

3 http://cycling74.com/products/maxmspjitter/


creation of new melodic content, there can be less separation between the guitarist’s roles as

performer and engineer.

• Create a cohesive system that combines performance techniques from two contrasting

schools of musical thought – The system will consider, not only the guitarist’s ability to play

guitar in a traditional way, but also the ability to interface with a computer system, which

digitally alters the sound. The author intends to bring these methods closer, so that they can

become intertwined in a single performance. The natural delivery of this performance

depends on the ability of this project to reduce the perception of the guitar and the iPod as

two separate entities, and replace them with the notion of one performance system.

• Amalgamate methods of traditional, structured performance and algorithmic composition –

The traditional performance methods of composed electric guitar music and algorithmic

computer music differ greatly. This project attempts to find a way for these methods to

complement each other within a partially composed performance, without compromising the

integrity of either approach.

• Explore the use of live input as a sound source for algorithmic composition – A large

proportion of algorithmic music is based on the use of computer-generated sounds. While

this is an effective method, and often very aesthetically pleasing, the approach of using more

natural, evolving sounds may present different, and possibly more interesting, results.

• Examine extended guitar techniques for performing, and evaluate the effectiveness of the

implementation of a supplementary interface – The possibilities of this augmentation will

need to be assessed, and appropriate extended techniques should be developed in order for

the system to be considered a valid performance option.

While the objectives above list the goals of the project itself, there are also some principles to be

considered throughout the development process.

The intention of this work is to design a project, not a product. In this way, the emphasis is placed

on project-specific goals and not future reconfigurability. This does not mean that the project is in a

closed box that cannot be edited – it simply focuses on the task at hand, without getting bloated

with notions of extending the system beyond its set objectives.


This project attempts to explore two contrasting genres of music and methods of musical creation.

These methods are to be combined into a system, where the guitarist controls the creation of the

sound and the state of manipulative parameters, but unpredictable movements through various sets

of rules can determine the eventual path of the algorithmically composed output.

1.4 Requirements to achieve these Objectives

Rather than dive wildly into attempting to achieve these goals, it is necessary to make a structured

plan of what needs to be done to research best methods for these objectives. An outline of the tasks

needed follows:

• Examine common guitar augmentation practice – By researching the approaches that others

have taken to similar augmentations, it is possible to avoid certain mistakes from the outset.

Of particular importance here is research on guitar augmentations that specifically use an

iPhone or iPod touch as an additional control mechanism.

• Consider widely accepted instrument design principles – Comparing the proposed

development to a list of principles that have been suggested for instrument design can help

to point out problems that may otherwise go overlooked.

• Consider User Interface design principles when creating interfaces – There exist distinct

guidelines, which advise on issues to consider when creating interfaces for computer

programs. These must be considered in order to create an intuitive and useable program,

upon which the system will be based.

• Devise suitable mapping systems for data received from iPod – There is no use in having

input data sent over the network to reflect the user’s actions, if the data is not properly

interpreted, and applied to suitable functions. The mapping of input data is no small task,

and requires due consideration.

• Investigate best methods for sampling and redeploying live source in accordance with

compositional rules – Several approaches could be taken to carry out the task of recording,

manipulating, and then reusing the sounds created by the guitar. The most efficient and

latency-free method must be discovered, in order to accommodate natural performance.


(CPU load must be considered here – redundant processing must be avoided in order to allow for the most

responsive processing)

• Develop a set of compositional rules, which will govern the performance – The specific

rules employed by the generative system can have a great effect on the overall sound of a

performance, and for this reason, a great deal of attention must be given to this area.


2. Project Justification

2.1 Why use the iPod touch?

In order to avoid accusations of jumping on a technological bandwagon, it may be necessary to

explain the reasoning behind the decision to use the iPod touch as an additional interface.

While modules in Human Computer Interaction (HCI) may provide a grounding in principles for

computer interface design, this introductory level material is scarcely a match for the millions of

dollars that Apple invests in research and development procedures (McLean 2008). The results

provided by this commercially successful interface far exceed any custom-made interface

achievable within the scope of this project.

The iPod touch has become a familiar interface for many people (Figure 1). Many iPod

applications (apps) are laid out in a relatively homogenous fashion, which presents the user with a

common, predictable interface. With this familiarity, comes less intimidation when faced with the

use of an interactive program to perform tasks that may otherwise seem complicated.

The built-in accelerometer in the iPod touch allows for the use of gestural data as a control

mechanism.

The multi-touch screen allows for the control of several different notes/parameters at once. This

makes efficient use of the musician’s interactions.

WiFi capabilities make communication with a server much easier, especially in the context of a

performance. The supplementary interface is a part of the original instrument – as opposed to, for

example, a MIDI keyboard or laptop, which demands its own location on stage, to which the

musician must constantly return, or at which they must remain.


Figure 1 - iPod Touch

The high-resolution screen provides ample feedback for both performance consequences and the

state of certain processes, which are occurring both on the device and on the computer.

Assistive applications, such as IP scanners, provide significant support for setting up the

application.

The device itself is small and lightweight, meaning that it will not impose on the performance. It

also means that the device can be removed easily, and does not have to remain a part of the guitar.

As previously mentioned, available resources affect the direction of research decisions. This is the

reason that an iPod touch was used as opposed to an iPhone – simply due to the fact that the author

had easier access to an iPod. The features concerned are the same on both devices, and hence they

are interchangeable for the purposes of this project.

2.2 Why use electric guitar?

The electric guitar has become ubiquitous thanks to its acceptance into popular culture. Mass

production of guitars, at a low cost to consumers, has led to the current situation, where there are

approximately fifty million guitarists worldwide (Millard 2004). The guitar, then, has become a

familiar interface, similar to the iPod.


The development of the guitar from Spanish classical to Western steel-string acoustics, and on to

electric and even digital guitars has solidified its classification as a hybrid instrument (Carfoot

2006). The guitar is built on developments in technology, and hence is inextricably connected with

technological change. Each change in guitar design marks another augmentation, which attempts to

improve the instrument’s capabilities.

The electric guitar is rarely encountered as an instrument independent unto itself – it is usually part

of a network of different modules, which make up a performance system (Lähdeoja 2008). This

system is comprised of the effects pedals, amplifier, cables etc. Each of these modules needs a real-

world location, in which they take up space. To access these, the guitarist must be able to navigate

to these locations while playing. For example, the effects pedals are an extension of the guitar, but

the guitarist still needs to be able to access the pedals on the floor with their feet while playing. This

project attempts to reconfigure the musician's environment, and re-design their performance space

and how they use it, by reducing the physical spread of the modules in the guitar system. The

implementation of an iPod touch is simply another abstraction of the control process.

2.3 Why create algorithmic music?

In attempting to bridge the gap between popular music and computer music, it is essential to

maintain some of the characteristics that define the two.

By composing according to defined algorithms, the musician can define the initial characteristics of

the sound by outlining particular rules, which will affect the eventual delivery of the musical

content. Within the interactions with these rules, however, the user has little control. The outcome

will essentially be determined to a certain degree, but the particular order of notes will be random.

In this way, there is a natural progression to the sound, which is attributed to the calculations of the

computer, but still complements a compositional form as recognised by the musician. This

maintains a balance between strict compositional structure and unpredictable random generation.


3. Research

3.1 Introduction

This is an area that has been widely researched and experimented with. Many informal sources,

such as YouTube videos, indicate an interest in the area of guitar augmentation. As this is a

relatively new and technologically related area, the Internet has proven to be a source of much of

the information relevant to this project. These sources include online conference papers, websites,

blogs, videos and tutorials.

3.2 Previous Work

For the purpose of classifying relevant research for this project, it has been divided up into three

separate areas:

• Guitar Augmentation – Electric guitar modifications relating to the integration of additional

interfaces, in particular the iPhone/ iPod touch.

• Spectral Processing – Sampling, analysing and manipulating audio signals.

• Algorithmic Composition – Software methods of composing music with computer

algorithms.

There are numerous examples of projects from each of these areas that could be discussed, but for

the purpose of this section, two particularly relevant examples of each will be presented to

demonstrate the context that surrounds this project.

3.2.1a Guitar Augmentation: iTouch 2.0 – Brian Green

One example of the use of an iPod touch in conjunction with an electric guitar is a project by sound

artist Brian Green called “iTouch Guitar 2.0” (Green 2009). In this project, Green attaches two

iPods to the surface of his guitar. The devices are used to communicate wirelessly with a laptop.

The two units can be seen mounted on the guitar in Figure 2. One of the units is used to control

transport functions such as record, loop and playback in Ableton Live [4].

[4] http://www.ableton.com/live


The other device is running Brian Eno’s generative music app “Bloom” – discussed in more detail

later.

Figure 2 - Brian Green's iTouch 2.0

With this project, Green creates ambient music by combining the computer-generated sounds of the

iPod app with the augmented guitar techniques he employs through the use of a violin bow to excite

the strings. This is an obvious example of how new sounds can be created through the

amalgamation of different compositional approaches within the same system.

A particular feature that is highlighted by Green is the use of the accelerometer to trigger

functionality in the app – in this case, clearing note presets. This demonstrates how the iPod’s

accelerometer can be used to take advantage of movements of the musician to affect the state of the

program.

3.2.1b Guitar Augmentation: Hexaphonic guitar for Spatial Performance – Bates et al.

In the paper “Adapting polyphonic pickup technology for spatial music performance” (Bates et al

2008), the authors describe a method for separating the audio signals from each string of an electric

guitar. This was achieved by using a divided (MIDI) pickup, and a breakout box. The breakout box

separated the signals of the six strings, and input each of them individually into a computer. This


separation allows for different manipulative processes to be applied to each string, essentially

treating them as individual instrument signals unto themselves. The six signals were panned

separately, to create a spatialised array of sounds. Several pieces were composed specifically for

spatialised performance with hexaphonic guitar. This paper suggests an overall direction for the

implementation of a hexaphonic guitar system, and informed the early stages of The Algotar

Project.

3.2.2a Audio Processing: Max/MSP Hexaphonic Toolbox - Reboursière et al.

In the paper “Multimodal Guitar: A Toolbox For Augmented Guitar Performances” (Reboursière et

al 2010), from the 2010 NIME [5] proceedings, the authors discuss the development of a Max/MSP

toolbox for augmented guitar. The toolbox is a collection of patches, which provide common tools

for working with hexaphonic guitar. These tools target three main areas: audio synthesis, audio

analysis, and gestural control. The latter two of these tools are particularly relevant to this project,

and will be explored in further detail in later chapters.

Some of the specific tasks facilitated by the multimodal toolbox are: “polyphonic pitch estimation,

fret-board visualization and grouping, pressure sensing, modal synthesis, infinite sustain,

rearranging looping and ‘smart’ harmonizing.” (Reboursière et al 2010). The most relevant of these

tools to the goals of this project is the one concerning polyphonic pitch estimation. For this reason,

much of the early work in pitch estimation for this project is based on the work already carried out

by Reboursière et al.

3.2.2b Audio Processing: Spectral Sound Processing using Max/MSP – Jean Francois Charles

Jean Francois Charles is a composer and live electronics designer who has carried out extensive

research into spectral processing and, in particular, time freezing of samples. This is achieved by

implementing several Fast Fourier Transform (FFT) techniques in Max/MSP. His work has been

published in the Computer Music Journal (Charles 2008), as well as example patches being made

available on the share page for Max/MSP on the Cycling 74 website [6].

[5] http://www.nime.org/
[6] http://cycling74.com/share.html


Using the methods that Charles outlines, it is possible to freeze a single frame of a sample so that it

will play endlessly. This allows the user to take a certain section of live audio, freeze and sample it

– essentially creating a new sound source, which can then be transposed and re-used in accordance

with whatever principles the musician decides to apply.

The applications of the sampling techniques that Charles shows are more basic than the complex

system of re-sampling and transposition necessary for The Algotar Project, but they form a strong

basis, upon which further experimentation can be carried out.
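
As a purely illustrative sketch of the underlying idea (and not a reproduction of Charles' published patches), a spectral freeze can be approximated offline in a few lines of Python with NumPy: the magnitude spectrum of one frame is captured, and new output frames are continually resynthesised from it with randomised phases. The function names, window size and hop size below are assumptions made for the example.

    # Rough sketch of a spectral "freeze" in the spirit of FFT-based time freezing
    # (illustrative only; names and parameter values are assumptions).
    import numpy as np

    def freeze_frame(signal, start, size=2048):
        """Capture the magnitude spectrum of one windowed frame of audio."""
        frame = signal[start:start + size] * np.hanning(size)
        return np.abs(np.fft.rfft(frame))

    def resynthesize(magnitudes, n_frames, size=2048, hop=512):
        """Play the frozen frame 'endlessly' by resynthesising it with random phases."""
        out = np.zeros(hop * n_frames + size)
        window = np.hanning(size)
        for i in range(n_frames):
            phases = np.random.uniform(0, 2 * np.pi, len(magnitudes))
            spectrum = magnitudes * np.exp(1j * phases)
            grain = np.fft.irfft(spectrum, size) * window
            out[i * hop:i * hop + size] += grain        # overlap-add
        return out / np.max(np.abs(out))                # normalise

    # frozen = freeze_frame(live_guitar_signal, start=44100)
    # drone  = resynthesize(frozen, n_frames=400)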

3.2.3a Algorithmic Composition: Experiments in Musical Intelligence – David Cope

David Cope is a classical composer who has developed software for computer-aided algorithmic

composition (CAAC). Cope’s software, Experiments in Musical Intelligence (Emmy), could create

classical compositions in the style of famous composers (Cope 2000). This was achieved by loading

a database of works from classical composers such as Bach, Mozart, or Vivaldi into a computer.

The computer would then analyse these pieces, and identify common techniques and compositional

approaches that were important to the style of a particular composer. By analysing the style in this

way, it was possible for the computer to then generate similar compositions, based on the original

works. Cope created over 5,000 Bach-style chorales in this way in a matter of minutes.

With his newest software, Emily Howell, Cope has gone beyond the analysis of just the work of

other composers, and has loaded his own style into the software. Cope will ask the computer a

question about what compositional approaches he should take for a particular piece, and the

software will respond with a musical suggestion. Cope will then either accept or reject this idea

from Emily, hence informing the computer about his stylistic preferences. In this way, the software

is constantly learning about Cope's style, and works towards the idea of a self-aware system of artificial

creativity (Saenz 2009).

The ideas that Cope proposes have provided significant inspiration for the direction of this project.

The prospect of a composer having a database of their own ideas – from which they can capture

inspiration in their unique style – suggests a much more informed compositional approach, where

musicians can explore aspects of their own style that they had not previously considered. This

project attempts to perform a similar, albeit much more basic, analysis of the musician’s style based


on the music they play throughout a performance.
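
One hypothetical way to picture such a basic analysis – not the method implemented in this project's patches – is a first-order Markov chain built from the notes detected during a performance: the system records which notes tend to follow which, and then generates new material with a similar profile. The note names and function names below are invented for illustration.

    # Hypothetical sketch: learn note-to-note transitions from a performance and
    # generate new material with a similar profile (first-order Markov chain).
    import random
    from collections import defaultdict

    def build_transitions(notes):
        """Count which notes follow which during the performance."""
        table = defaultdict(list)
        for current, following in zip(notes, notes[1:]):
            table[current].append(following)
        return table

    def generate(table, start, length=16):
        """Walk the transition table to produce a new, stylistically similar line."""
        line = [start]
        for _ in range(length - 1):
            choices = table.get(line[-1])
            if not choices:                  # dead end: fall back to any known note
                choices = list(table.keys())
            line.append(random.choice(choices))
        return line

    played = ["E3", "G3", "A3", "G3", "B3", "A3", "G3", "E3"]
    print(generate(build_transitions(played), start="E3"))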

3.2.3b Algorithmic Composition: Bloom - Brian Eno & Peter Chilvers

Brian Eno and Peter Chilvers recently released an iPhone app, “Bloom”, which brings algorithmic

music to an easily accessible level (Eno and Chilvers 2010). Eno, a pioneer of modern Generative

Music, presents the user with a simple interface, where the screen represents a canvas; when the

user touches anywhere on this canvas, a note is generated. The pitch of this note depends on its

location on the Y-axis.

This project demonstrates a considerable amount of usability. The behaviour of the sounds is

dependent on the presence of multiple notes, so they can interact with one another. The effect that

the notes have on each other is minor, but as the sequence that the user generates passes through

more iterations, the overall sound begins to gradually change, taking on a different direction to that

which was initially created by the user.

This shows that generative and algorithmic music need not exist only within the boundaries of

academia, and can be applied in an intuitive and easy-to-understand way.

3.3 Research Undertaken

Beyond the research into the specific projects mentioned above, broader research has been carried

out in order to assess the necessary steps to put The Algotar Project into practice.

3.3.1 Guitar Augmentation

A broad study of guitar augmentation methods can present a good insight into the possible

practices that may prove successful in this project. In order to evaluate the success of these

methods, however, it is necessary to compare the examined projects to a set of defined principles,

which are accepted as a valid guideline. For this purpose, in a study that ran parallel to this project,

the author compared a number of different guitar augmentations to some key principles for

designing computer music controllers (Cook 2001). The principles that were most relevant to this

particular project are outlined and explained below.


1. Some players have spare bandwidth, some do not.

A common trait among many guitarists is to reach a point of intermediate skill and not progress any

further. This lack of technical development means that the musician is spending more time

rehearsing over the same techniques in different ways. What this rehearsal builds, instead of

virtuosity, is competency in the level of technique that the musician has already acquired. In this

way, over time, the same musical tasks require less attention due to familiarity with the processes

involved. This is where the spare bandwidth exists. The guitarist does not need to dedicate their full

attention to what they are fretting, or how they are picking – this leaves ample cognitive room to

take on other tasks.

This can often represent the guitarist’s next step in technical development. Instead of moving

upward in the development of more advanced traditional playing technique, the guitarist moves

parallel to his/her current level of skill by controlling other processes simultaneously.

This has been seen for a long time in the way guitarists control floor-based effects pedals (Lähdeoja

2008). However, with advances in technology, the popularity of extended hardware integration on

the guitar itself has increased. These augmentations often present themselves as pre-existing gaming

devices mounted on the guitar to add increased control over computer-based manipulative

processes.

The most common technologies used for such integrations are the Nintendo Wii Remote [7] and the

iPhone/iPod touch (Bouillot et al 2008). The advantages presented by such systems include their

wireless communication capabilities (Bluetooth or WiFi), inbuilt accelerometers, and additional

buttons/ interface control. When mounted on the surface of a guitar, these devices afford instant,

unimposing interaction for the guitarist. Essentially, the devices become part of the guitar, and the

guitarist’s interactions with the new interfaces can simply be a part of a larger performance.

When the guitarist does not need to worry about the position of the controller, it is easier to make

use of the gaps in original playing technique, in order to control the output parameters. This is

similar to the use of a traditional tremolo arm. The playing style does not need to be affected by the

introduction of the tremolo arm, as the guitarist has developed his/her technique around the

knowledge of the position of the accessory. The musician is able to create certain affordances

[7] http://www.nintendo.com/wii/console/controllers


within the performance, which allow for un-obtrusive interaction with the device.

2. Copying an instrument is dumb, leveraging expert technique is smart.

The idea of leveraging expert technique means that the guitarist can build a repertoire of additional

control functions around the common actions that are already present in their playing techniques.

This can further reduce the cognitive load required during a performance. Simple associations with

common movements that we make within the normal playing environment can make the musician

more comfortable and feel less affected by the requirements of the interface (Ängeslevä et al 2003).

Lähdeoja (2008) described some relevant associations between common performance actions and

extended consequences. Here, the associations are created by analysing the output signal from the

guitar with a hexaphonic pickup – a pickup that has six different transducers, one to detect the

signal of each guitar string individually. This allows for complete polyphonic detection of pitch,

meaning that the system can pick up on specific chords and patterns. One of the interesting

mappings applied in this project is the association of detected bend sounds to a wah effect. This

means that every time the guitarist bends a note while playing, this will be detected by the software,

and a wah effect will be activated. This simple association creates a much more intuitive method of

implementing the effect. By turning the function into an association with an inherent technique,

there is one less element of distraction, and the guitarist can concentrate on controlling the sound

entirely from one location.

Another mapping of this type from the same project stems from the analysis of spectral content of

the input signal. There is a control sample stored in the system, which represents the average

spectral content of a guitar sound. The input signal is compared to that data from the control

sample, in order to assess the spectral constituents of the sound that is being played. When the

guitarist palm-mutes the strings with their right hand, the amount of higher frequencies in this

signal is greatly reduced. This is seen as a trigger for a distortion effect, and the level of reduction

in high-end frequencies is dynamically mapped to the overdrive level on the effect. Again, this

takes an action that normally exists independent of the playing technique and integrates it, reducing

the cognitive load for the performer.

3. Make a piece, not an instrument or controller.

Many projects in this field do not draw a conclusive line under the development process and fully

explore the musical aesthetics afforded by the augmentation. This leads to what Andrew Schloss


describes as confusion between the designer’s role as programmer and musician.

“People who perform should be performers. A computer music concert is not an

excuse/opportunity for a computer programmer to finally be on stage. Does his/her presence

enhance the performance or hinder it?”

(Schloss 2003).

Michel Waisvisz approached this problem with his electronic instrument “The Hands” (Bongers

2007). Waisvisz noted that there is a point where the designer should stop changing the instrument,

and focus on playing it. It is for this reason that he stopped all development on his instrument

twenty years ago, and has spent the time since developing his mastery in playing the instrument.

This is also evident for modern guitar augmentations. The use of a Wii remote allows for gesture

data to be integrated into a performance by using the accelerometer. However, many examples of

Wii-based augmentations online do not devote due care to the practical applications afforded by

such systems. One particular example that demonstrates performance-driven design is the pitch-

bend mapping employed by Rob Morris (Jones 2008). With a Wii remote attached to the front of

the guitar, the movement of the headstock takes on new meaning as a controller of effects

parameters. This functionality is integrated seamlessly into a performance, as opposed to some

examples, which seem to place more emphasis on creating the system than actually developing its

use.

4. Smart instruments are often not smart/ Programmability is a curse.

This is particularly relevant to the section of this project that deals with the iPod development. It is

easy to get carried away with the potential of certain systems, and attempt to build a project beyond

the extent of what is relevant. When designing interactive systems, usability must be considered.

The more layers of functionality present in a program, the more confusing it becomes for the user.

For this reason, this project attempts to build a project-specific program, which is refined to suit its

specific needs.


3.3.2 Networking Software

A number of iOS [8] software systems have been researched in order to assess their capabilities in

controlling Max/MSP remotely. These systems mainly consist of an iPod app, which acts as a front-

end for wireless communication with the laptop. In all cases, Max/MSP interprets the data sent from

the device using the [udpreceive] object.

c74 [9] – This is a paid app that is specifically designed for communication with Max/MSP. For this

reason, the objects displayed in the interface, once created, resemble their corresponding objects in

Max. For example, a number box is a rectangle with a triangle inside both in the app and in Max.

This system allows for quick communication with Max. The iPod interface can be dynamically

reconfigured in real-time using the [c74] external object for Max. Creating interfaces, however, is

quite complicated, and requires coding and knowledge of the object functions in order to plot out

the location of controls within the interface. This app seems rather limited and does not extend the

same intuitive control to the user as some other apps explored. The size of the objects in the

interface is also an issue, as they can often become obscured beneath the user’s finger, resulting in

no visual feedback as to their state.

MrMr [10] – An open-source application available for OS X and iOS. This allows great control over the

development process of any interface, and there are also example interfaces included in the app.

The app offers multiple interface pages/sections, a feature not included in the C74 app. This means

that there can be separate pages in the app to deal with different sections in the Max patch – a

significant advantage if the app is to be multi-functional and control, for example, effects, note

generation, looping etc. The fact that the code is available for the application makes it a beneficial

learning tool for developing apps that communicate with a server. There are some issues with this

app, however. There is a distinct latency problem that renders the app almost unusable for the

purpose of this project. It is not clear whether this is simply a settings issue or a problem within the

program itself. Also, the widgets, or objects, available for the interface are not as varied as with

other apps examined.

[8] http://www.apple.com/ios/
[9] http://www.nr74.org/c74.html
[10] http://mrmr.noisepages.com/


TouchOSC [11] – This is another paid app, which presents the user with a configurable interface.

Interfaces for this app are built in a separate interface builder program in Mac OS X. A number of

these interfaces can be saved to the device, and recalled according to the user’s needs. Of those

studied, this application presents the most complete example of network communication for musical

purposes. There are a number of preset interfaces built into the app to deal with different audio

applications such as Logic [12] or Ableton Live. Some of the features of this app include X-Y pad

control, toggle buttons, momentary buttons, sliders and rotary dials.

All of the systems examined work by sending messages to a specific port on the laptop, which is

found at the IP address specified by the user. These messages are preceded by an identifier, which

tells the receiving application which control the values are coming from, e.g. “/slider1 33” might

mean that slider 1 has a value of 33. These applications also send constant XYZ values for the

device provided by the accelerometer.

A notable advantage of the TouchOSC system is the ability to communicate in both directions. This

means that any consequences of algorithmic processes carried out on the computer can be sent back

to the iPod. This information can be represented on the screen of the iPod, so that the musician is

constantly aware of what processes are taking place, as opposed to blindly trusting in the software

system.

3.3.3 Algorithmic Composition

Algorithmic composition, to a certain extent, can be traced back to composers such as Joseph

Haydn and Wolfgang Amadeus Mozart (Cope 2000). These composers, among others,

experimented with Musikalisches Würfelspiel (musical dice game), which entailed rolling a pair of

dice in order to determine which segment of a score to play next. In this way, a collection of several

hundred short ideas would be whittled down and combined to form one complete piece of music.
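
A toy version of this dice game illustrates the principle in a few lines; the bar labels below are invented placeholders rather than entries from any historical table.

    # Toy Musikalisches Wuerfelspiel: two dice choose one pre-written bar for each
    # position in the piece. The bar labels here are invented placeholders.
    import random

    # For each of 8 positions in the piece, bars indexed by the dice total (2..12).
    bar_tables = [{total: f"bar_{pos}_{total}" for total in range(2, 13)}
                  for pos in range(8)]

    def roll_piece(tables):
        piece = []
        for table in tables:
            total = random.randint(1, 6) + random.randint(1, 6)   # roll two dice
            piece.append(table[total])
        return piece

    print(roll_piece(bar_tables))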

In its modern form, CAAC is clearly visible in the work of composers such as David Cope. CAAC

is a form of music that is based on interactions between notes and sets of rules. As the music

progresses, it diverges through different possibilities available within the set algorithms and evolves

accordingly.

[11] http://hexler.net/software/touchosc
[12] http://www.apple.com/logicstudio/


CAAC has come under much criticism from the musical establishment in recent times (Blitstein

2010). The idea of automated creativity has been challenged on moral grounds, and the material that

computers produce has been described as “soulless” and “without depth of meaning”. In particular,

Doug Hofstadter, a leading researcher in cognitive science, has expressed concern at the prospect of

computers taking over the role of composer, and becoming the root of what he sees as human

processes.

“Things that touch me at my deepest core might be produced by mechanisms millions of times

simpler than the intricate biological machinery that gives rise to the human soul”

(Cope 2001)

However, David Cope argues that algorithmic systems are merely tools used by the composer, and

that the eventual compositional decisions are ultimately made by a human composer (Adams 2010).

The author is in agreement with this opinion, and considers CAAC systems to be useful tools for

exploring musical avenues that one might simply overlook.

Cope makes a strong argument in favour of CAAC by pointing out that software such as Emmy is

simply a tool to aid composing, in the same way that a shovel is a tool that aids in digging a hole.

“The programs are just extensions of me [the composer]. And why would I want to spend six

months or a year to get to a solution that I can find in a morning? I have spent nearly 60 years of my

life composing, half of it in traditional ways and half of it using technology. To go back would be

like trying to dig a hole with your fingers after the shovel has been made, or walking to Phoenix

when you can use a car.”

(Adams 2010)

One particular piece of commercial software that is beneficial for understanding the nature of

algorithmic, and in particular generative, music is Nodal [13]. This software presents the user with a

graphical environment, where sound sources and notes are represented by particular shapes. The

shapes are laid out on a grid on the screen and connected together in series. The sounding of each

note depends on a signal coming from the note that preceded it, and a form of weighted distribution

determines the chances of this trigger being passed through the note.

[13] http://www.csse.monash.edu.au/~cema/nodal/


This software provides a visualization of the network of interactions that can be created in

generative music. The path of the sound has the ability to branch off into new sections depending on

how the preceding music has developed. It is this dependency that is key to the functioning of this

kind of algorithmic computer music.
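
The dependency described above can be pictured with a short sketch; the following is an illustrative approximation of the idea (not Nodal's implementation), in which each node holds a note and weighted links to its possible successors, and a trigger wanders through the network. The network layout, note numbers and weights are invented for the example.

    # Sketch of the node-network dependency idea: each node holds a MIDI note and
    # weighted links to possible successors; a trigger walks the network.
    import random

    network = {
        "A": {"note": 60, "links": [("B", 0.7), ("C", 0.3)]},
        "B": {"note": 64, "links": [("C", 0.5), ("A", 0.5)]},
        "C": {"note": 67, "links": [("A", 1.0)]},
    }

    def play(network, start="A", steps=8):
        node, notes = start, []
        for _ in range(steps):
            notes.append(network[node]["note"])
            successors, weights = zip(*network[node]["links"])
            node = random.choices(successors, weights=weights, k=1)[0]
        return notes

    print(play(network))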

An example of a Nodal network can be seen in Figure 3. The starting points of different players are

represented by the coloured triangles. This idea, of having separate nodes, from which mini-

networks extend, provides a suggestion for how a generative system might be approached in

Max/MSP. As with many examples of generative music software, Nodal uses computer-generated

sounds as its source.

Figure 3 – Nodal Screenshot

Included in Appendix 2 is a screenshot of the previously mentioned generative application

“Bloom”. The interactions between notes are not visible, as seen with Nodal, but the state of the

note within its own lifecycle is visible. This is discussed briefly in the appendix.


4. Networked Control – TouchOSC Development

The electric guitar can scarcely be considered on its own. It is a system, a network of

interconnected modules - effects pedals, amplifiers, additional playing controllers etc. This network

has a physical presence, and the modules must attach themselves to an environment, an area in

which they dwell. The more effects pedals there are, the larger this area gets, meaning a larger area

with which the guitarist needs to interact. By packaging these capabilities into computer software,

and controlling them wirelessly, via a small, familiar interface, the musician's physical requirements

can be reduced.

One of the primary goals of this project is to develop a system that can be controlled wirelessly,

without the need to attend to a computer while playing. This allows the musician to dedicate much

more time to creative processes, rather than concentrating on a laptop. Of the networking

applications examined, it was decided that TouchOSC best suited the needs of this project due to its

flexible features and simple interface design tool.

4.1 Functions of Application

Before developing a final networking patch and interface, it is necessary to identify some of the

main functions that the system will attempt to address.

Tap tempo

In order to facilitate accurate audio synchronisation, it is necessary to use a master clock, which will

control the activation of events. The speed of this clock will be a multiple of the tempo. Including a

tap button in the interface will allow the user to dynamically alter the tempo to suit the performance.

This works by calculating the average time between a number of peaks played on the sixth string of

the guitar. This value is then used to calculate the beats per minute.
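
The averaging step can be pictured with a short sketch; the function below assumes that tap or peak times arrive as timestamps in seconds, and its name and window size are illustrative rather than taken from the Max patch.

    # Illustrative tap-tempo calculation: average the interval between recent taps
    # (or detected peaks) and convert it to beats per minute.
    def bpm_from_taps(tap_times, window=4):
        """tap_times: ascending timestamps in seconds of the most recent taps/peaks."""
        recent = tap_times[-window:]
        if len(recent) < 2:
            return None
        intervals = [b - a for a, b in zip(recent, recent[1:])]
        average = sum(intervals) / len(intervals)
        return 60.0 / average

    print(bpm_from_taps([0.0, 0.52, 1.01, 1.49, 2.02]))   # roughly 120 BPM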

Tap to indicate key

Just as it may be necessary for the tempo to change during or at the beginning of a performance, it

is also necessary to control the key of the piece. Rather than calculating the key based on the notes

played and the relationships to certain scales, it was decided to simply alert the system that the next

note played will be the root note. In this way, the user will tap a button on the iPod, which will send

a message to the computer, telling it to listen for the next note to be played, and to set that note as

the root note for the key. This is a simple, yet effective way of defining the key remotely.
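
The logic amounts to a small state machine; the sketch below (with invented names, not code from the actual patch) shows the idea of arming the system with a tap and latching the next detected note as the root.

    # Sketch of the "tap to indicate key" idea: after the button is tapped, the very
    # next detected note becomes the root of the key. Names are illustrative.
    class KeyListener:
        def __init__(self):
            self.waiting = False
            self.root = None

        def tap(self):                       # message sent from the iPod button
            self.waiting = True

        def note_detected(self, midi_note):  # called by the pitch tracker
            if self.waiting:
                self.root = midi_note % 12   # keep the pitch class as the key's root
                self.waiting = False

    listener = KeyListener()
    listener.tap()
    listener.note_detected(64)               # an E is played next
    print(listener.root)                     # 4 (pitch class of E)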


Manipulate 6 x volume controls

One of the advantages of a hexaphonic system is that it allows for each string to be mixed

separately. In this way, the user can choose to mute a certain string if they do not want accidental

sounds to come out in the performance, or increase the volume on strings that are more important to

the sound. This is possible by using different faders on the iPod to control the volume of each string

individually. This is a significant advantage over standard mixing configurations.

Accelerometer control

When mounting a device on an instrument, it is wise to consider the extra capabilities that it will

afford. With an iPod on the body of the guitar, one can take advantage of the accelerometer to

measure movement of the guitar, and to detect performance gestures. TouchOSC can automatically

send three axis orientation data, permitting the calculation of movements such as tilting or shaking

the guitar.
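
For illustration, simple tilt and shake measures can be derived from the three accelerometer axes as sketched below; the thresholds and axis conventions are assumptions for the example, not values from the final system.

    # Sketch of deriving simple gestures from the three-axis accelerometer data
    # (/accxyz). Thresholds and axis conventions are assumptions for illustration.
    import math

    def tilt_degrees(x, y, z):
        """Approximate tilt of the guitar from the horizontal plane."""
        return math.degrees(math.atan2(y, math.sqrt(x * x + z * z)))

    def is_shake(x, y, z, threshold=1.8):
        """Treat any strong overall acceleration as a shake gesture."""
        return math.sqrt(x * x + y * y + z * z) > threshold

    print(tilt_degrees(0.0, 0.5, 0.87))   # about 30 degrees
    print(is_shake(1.4, 1.2, 0.6))        # True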

Live input and Algorithmic output mixing controls

For this system to be of any use in a performance environment, it must be possible to control the

balance between the live sound and the algorithmic accompaniment. The composed sound is

intended to complement the musical content that is being produced. However, it is also necessary to

maintain dynamics in a performance. For this reason, there is the need to implement control

mechanisms for the volume and panning of both live and algorithmic sound.

Turn on generative mode

It is also important to allow the system to continue to compose, without a dependency on constant

input from the musician. This function will start the composition and playback of notes, influenced

by the original parameters, but will essentially navigate its own way through the several algorithms

to deliver computer-composed content.

4.2 Interface Design

In designing the interface for this system, certain goals must be considered in order to maximize the

usability of the system. These goals seek to minimize the amount of distraction caused by the

interface, in an attempt to integrate the device as a natural part of the performance.


Pages

Pages are a way of creating a multi-faceted environment within the app, by having different sections

to the interface. The use of separate pages for different types of controls allows for a simple

environment that the user can navigate through with little thought. For example, placing all setup

and initialisation controls on the first page will allow the user to access the most important

functions easily. Putting controls such as the EQ manipulation on a page of lower priority allows a

natural progression through the pages in order of importance.

Colours

Using a single colour for each control on a page allows for instant visual identification of which

page is being controlled. For example, the controls on the first page are primarily red, the second

page is green, and the third page is yellow. This is a very simple tactic, but it considers the attention

of the musician, and attempts to create less distraction by replacing the identification of an entire

layout of controls with the simple identification of one colour. This allows more concentration on

musical matters.

Space and size

One must remember that the very use of a touch interface means that some of the display will be

obscured during any interaction. To account for this fact, all objects on the interface have been

made large enough to be easily visible, and placed far enough apart so that they do not obstruct the

view of other controls.

Text

Text is a simple way of indicating the function of each control. During a performance, it is

important to not be distracted by having to remember which control does what. To avoid this

confusion, simple labels are placed beside each control.

Basic and simple layout

The layout of the interface is designed to simplify interactions between the user and a computer

system. This means that only necessary controls will be displayed, and they will be organized in a

familiar and simple way, similar to existing performance interfaces.


Feedback

An advantage of the TouchOSC system is that it allows communication in both directions between

the iPod and the computer. This communication permits messages to be returned from the computer

when an action is taken, providing the musician with feedback as to the consequences of their

actions. The interface is designed to take advantage of this communication, and to display

information relating to algorithmic processes.

Figure 4 shows the main page for the TouchOSC interface that was created to serve the purposes

outlined above.

Figure 4 – TouchOSC interface (Page 1)

4.3 Communicating with Max/MSP

Routing

To communicate with Max/MSP, TouchOSC sends a series of messages over the wireless network

via UDP. These messages are coded with tags that indicate which object each message relates to.

For example, the values output from a fader object would follow a tag such as “/fader1”. In order to

extract the data from such messages, it is necessary to route each object separately with the [OSC-route] object. This object will identify the tags and output the values attached. In the case of the

fader object mentioned above, the [OSC-route] object would look like this: [OSC-route /fader1].

This will take the values of fader1 and send them out the object’s first outlet.

What must also be considered, when routing data such as this, is that each object has multiple tags

by which it is identified. If fader1 is contained on the second page then the “/fader1” tag will be

preceded by a “/2” tag. Therefore, the full tag would be “/2 /fader1”. This statement identifies an

object contained on page 2 that is named “fader1”. In the interest of developing a Max patch that

corresponds with the interface, it is necessary to group the data according to which page the objects

are on. To do this, there must be two stages of [OSC-route] objects – one to sort the pages from

each other and one to sort each object on that page from each other. This is demonstrated in Figure

5 below.

Figure 5 - [OSC-route] Object in Max/MSP

The only iPod data that does not need to be routed through a specific page is the accelerometer.

The user can choose to turn on the accelerometer so that it will send constant orientation values to

the computer. This data arrives through the [udpreceive] object with the tag “/accxyz”. There are

two steps that need to be taken in order to access the accelerometer data sent from the iPod. First,

one must identify the data by routing any messages with the tag “/accxyz” to a specific location. To

do this, our route object looks like this: [OSC-route /accxyz]. Contained within this message is a

packet with values for the x, y and z axes. In order to interpret this information, it must be

unpacked. Hence, the chain for accessing the accelerometer data is shown in Figure 6.


Figure 6 - Routing accelerometer data in Max/MSP
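
To make the routing logic concrete outside of the graphical environment, the following Python sketch imitates the behaviour of the nested [OSC-route] objects. It is purely illustrative – the message format (an address string paired with a list of values, with page and control written as one slash-delimited path such as "/2/fader1") and the function name are assumptions made for the example, not part of the Max patch.

def route_message(address, values):
    # Split a slash-delimited OSC address into its parts,
    # e.g. "/2/fader1" -> ["2", "fader1"], "/accxyz" -> ["accxyz"]
    parts = address.strip("/").split("/")
    # Accelerometer data is not tied to a page: unpack x, y and z directly
    if parts[0] == "accxyz":
        x, y, z = values
        return ("accelerometer", x, y, z)
    # Everything else is routed in two stages: first by page, then by control
    page, control = parts[0], parts[1]
    return ("page " + page, control, values[0])

# Hypothetical incoming messages
print(route_message("/2/fader1", [0.75]))          # ('page 2', 'fader1', 0.75)
print(route_message("/accxyz", [0.1, -0.3, 0.9]))  # ('accelerometer', 0.1, -0.3, 0.9)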

Mapping

The remaining work in implementing the networked aspect of the system is simply to map the

values received to specific parameters. Most of these parameters have been outlined in the

description of the objects in the interface and what they do. However, some values, such as the

accelerometer data, remain open to a number of applications. These functions will be discussed in

later chapters.


5. Hexaphonic Implementation

5.1 Hexaphonic Basics and Requirements

The word hexaphonic comes from the Greek ‘hex’ meaning six and ‘phonic’ meaning sound

(Wikipedia 2011). A hexaphonic guitar is a guitar that picks up the sound of each string separately.

The way this is usually implemented with electric guitar is with the use of a special pickup known

as a divided pickup. This is a kind of pickup that was originally intended for use with guitar

synthesizers. These pickups have six separate poles on their top surface, each of which acts as a

pickup unto itself, and transduces the sound of just the string under which it is situated (Figure 7).

These signals are then sent to an output controller attached to the guitar. This output is then usually

connected to a synthesizer, which will track the pitch of the note being played using a pitch to MIDI

converter.

Figure 7 - Divided pickup, showing individual poles under each string

Creating synthesized sounds, however, is not the concern of this project. The importance of

hexaphonic pickup technology to this project is that it provides the ability to analyse notes being

played on separate strings simultaneously, and also transmit audio. Essentially, this allows for


polyphonic pitch estimation. With the ability to present a computer program with six inputs, one

can process the sounds in such a way as to identify chords, patterns, scales and even techniques and

inflections such as palm muting and bends. This can be achieved by a number of different methods

involving frequency analysis in Max/MSP.

In order to access the signals of the separate strings, however, one must first employ a

breakout box. This is a converter box that takes the signals from the divided pickup through a 13

pin input, and outputs the signals for each of the six strings individually. Similar methods were

discussed in a paper on the use of polyphonic pickup technology for spatialised performance –

which used a Shadow divided pickup connected to a home-made breakout box (Bates et al 2008).

This system was used to record and perform a number of pieces for spatialised hexaphonic guitar,

where each string was panned separately.

Figure 8 - 13 Pin Signal from divided pickup

Figure 8 shows the pin map for the 13 pin output of the divided pickup. As the primary concern for

this project is simply separating the signals of each string, the only pins that we are concerned with

are pins 1-6 and pins 12 and 13. Pins 12 and 13 provide power to the pickup, which is supplied by a

9V battery inside the breakout box, shown in Figure 9.


Figure 9 - Breakout Box

5.2 Inputting Audio, and Issues

Inputting these signals into the computer for analysis is the next step in the process. The first issue

here is boosting the signals from the pickup to a suitable level for input. The divided pickup is

designed to transmit pitch information for analysis only and, hence, is not adept at delivering audio

for performance purposes. This problem is solved by using a second battery, which amplifies the

signal to a level close to that of an ordinary pickup.

One initial problem that was encountered in inputting the six signals was finding an interface that

could accommodate the number of inputs needed. The solution to this issue was to create an

aggregate device in OS X. This binds several audio interfaces together and tells any software

programs one may be using to regard the sum of the inputs as being from the same device.

Essentially this allowed for the use of three stereo input interfaces as one six input device. The

audio quality and reliability of such an interface, however, was not found to be as good as a single

device with more input channels. An eight-channel interface was sourced for the final

implementation.


Paul Rubenstein of Ubertar takes a different approach to hexaphonic pickups (www.ubertar.com/hexaphonic/). Rubenstein

develops hexaphonic pickups from scratch, which are much more similar to regular magnetic

pickups, and are claimed to offer a much truer tone. These are proposed as a solution for musicians who

are trying to access the characteristics of each string and act upon them separately. These pickups

are based on a more traditional design, which is built with sound quality in mind. This differs from

MIDI pickups, which are not intended to transmit audio signals for recording but rather just

information for processing, and hence are less concerned with the quality and level of the audio.

These pickups require alterations to the guitar, which was considered a disadvantage in the context

of this project and led to the divided pickups being used.



6. Input Analysis

6.1 Basic Principles

In building towards an algorithmic system that will intelligently interpret and compose guitar

music, one must first devise an efficient system for analysing the incoming audio. Hexaphonic

systems allow for each string to be analysed individually. This means that using a simple

monophonic pitch estimator such as [sigmund~] in Max/MSP, we can track the notes being played

on each string, and recommend further processing action in accordance with what is being played –

essentially, we end up with polyphonic pitch tracking. The [sigmund~] object will receive an audio

signal at its input – in this case, a signal from one particular guitar string. The object will then

analyse the signal, and output two values: a floating point number to represent the pitch of the input

as a MIDI note (0. – 127.), and another float to represent the current amplitude of the signal (usually

peaking at approximately 75.).

However, what one must remember is that the actual pitch of the notes being played is only a minor

part of a piece. It is the manner in which the notes are being played, and how they relate to each

other that is arguably the most significant contributor to how an overall piece sounds. Hence, what

is just as important as pitch tracking is being able to identify the attack of the sounds, the spectral

content, the duration of the note etc. This is what tells us whether a note was picked or hammered-

on; if there was a bend or vibrato present; whether the note was allowed to ring out or palm muted.

These are the kind of characteristics that make up the sound of a piece, and identifying each of these

characteristics can provide a much greater wealth of information about a performance.

Deciphering this information from sound alone, however, can be quite difficult. While conducting

experiments for this project, much time was dedicated to programming Max to calculate efficient

rates of change in amplitude, in order to track events such as palm muting or vibratos etc. As many

approaches to this kind of calculation are mere estimations, what has been found to be beneficial is

the introduction of a probability system. In this regard, one collects as much information about a

sound as possible, and puts it all together to give an educated estimation of what is happening.

For example, we can estimate the pitch of a note due to the relative accuracy of the [sigmund~]

object. We also know the current amplitude of the signal. It is possible to tell that the amplitude has

increased greatly because of the high positive value for rate of change in amplitude. This means that

the note has just been struck. In this example, we also know that the rate of change after striking the

note is decreasing rapidly, and the length of time that the amplitude has remained at an audible level


is very small. This means that the note has a very sharp attack and decay. The spectral content of

the note is lower in some of the higher frequencies than one would normally expect. At this point it

seems fair to conclude that the note in the example has been palm muted.

The [bonk~] object also offers analysis of attacks, but it has not yet been implemented to any

significant extent in this project. The object provides values for intensity in 11 different frequency

bins, but does not clearly indicate what frequencies these bins contain. There is also a feature where

the user can make the object learn the attack of different instruments and then try to match them

when similar attacks are passed through again. The attack information is stored in a text file. This

was not found to be particularly accurate but continued experimentation is suggested – for example,

by recording several dozen attacks of a certain kind, and then several dozen of a different kind, and

proceeding to test the object for accuracy.

Some of the early audio analysis work of this project was based on the paper and patches of

Reboursière et al. (2010). These authors developed patches for similar applications with augmented

hexaphonic guitar.

6.2 Note Tracking

In the subpatch [pitch_abstraction_tuohy], the [sigmund~] object was used to calculate the pitch and

amplitude of incoming audio signals. As with many patches in this project, this subpatch was one of

six copies – one for each string. Through a series of [gate], [route] and [trigger] objects, this patch

sends information about the audio signal to the rest of the patches.

Measuring the notes on a standard tuned 22 fret guitar as MIDI values, the highest fretted note is

equal to a value of 86. For this reason, the patch only passes through notes below a value of 95 (to

allow for variations in tuning etc.) With regard to pitch, this patch sends out values for the initial

pitch on first attack (int), the precise current note (float), the whole number closest to the current

note (int), and the cents difference between the exact current note and closest whole note (float).

The cent value is of particular importance to the [tuner] subpatch, which acts as a string-specific

guitar tuner by estimating the difference between the current pitch and the closest whole note.

The amplitude values sent from the [pitch_abstraction_tuohy] subpatch are calculated by using the

amplitude value from the [sigmund~] object and subtracting the initial background amplitude

measured when the patch is first opened. Threshold values for amplitude are set in the main patch,


which allow for the triggering of “note on/off” messages when the amplitude passes these

thresholds. For example, a “note on” message will be sent when the amplitude is above 40, and

“note off” will be sent when the amplitude drops below 20. The current amplitude value is sent as a

floating number to the rest of the patch. These amplitude values are used in the [colour] subpatch to

provide visual representation of the current note. In the main patch, the note will be displayed in a

number box. The colour of this box will be dictated by the current amplitude; if the amplitude is

above 50, the box will be red; between 40 and 50, the box will be green; and if the amplitude is

below 40, the note will no longer be reported, and the box will remain white. This is shown below

in Figure 10.

Figure 10 – Max/MSP Note & Amplitude Display
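
The note on/off thresholds and the colour scheme described above amount to a simple hysteresis gate. The following Python sketch restates that logic for clarity; the class and function names are invented for the example, and the thresholds are those quoted in the text.

class NoteGate:
    # Hysteresis thresholds from the text: "note on" above 40,
    # "note off" once the amplitude drops below 20
    ON_THRESHOLD = 40.0
    OFF_THRESHOLD = 20.0

    def __init__(self):
        self.note_on = False

    def update(self, amplitude):
        if not self.note_on and amplitude > self.ON_THRESHOLD:
            self.note_on = True
        elif self.note_on and amplitude < self.OFF_THRESHOLD:
            self.note_on = False
        return self.note_on

def display_colour(amplitude):
    # Colour of the number box in the main patch
    if amplitude > 50.0:
        return "red"
    if amplitude >= 40.0:
        return "green"
    return "white"  # below 40 the note is no longer reported

gate = NoteGate()
for amp in [10.0, 45.0, 55.0, 30.0, 15.0]:
    print(gate.update(amp), display_colour(amp))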

While the [sigmund~] object provides useful information as to the state of the input signal, the

accuracy of the object is not always reliable. For this reason, when calculating pitches for analysis,

it is necessary to employ a system of conditions and averaging that will allow for more accurate

calculations. This takes place in the [What’sTheRealNote?] and [averagenote] subpatches. A new

note is reported when the pitch changes from one whole note to another, or when the rate of change

of amplitude is increasing. In this way, we can report new notes, not only when the string is picked,

but also when we bend, pull-off or hammer-on a note. This new note can be documented, but there

are still accuracy issues relating to the [sigmund~] object. Due to the tendency of guitar notes to

produce harmonics depending on fretting patterns and positions, the [sigmund~] object will often

detect a note as being an octave of the note actually fretted by the musician. To combat this, the

note value is passed through the following statement:

if $i1 > ($i3 + 24) || $i1 == ($i2 + 12) || $i1 == ($i2 + 24) then $i2 else $i1

where i1 = current note, i2 = previous note, i3 = value of current string open note


This allows us to filter out notes that may be miscalculated. From here, the note value is passed

through a [split] object, which will only take note values between the value for the open string and

the value for the note that is 24 frets above this value. For example, with standard tuning, our value

for an open first string would be 64. Therefore, the only notes that will be passed through the [split]

object will be those between 64 and 88.
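
The octave-correction expression and the [split] range can be restated in a few lines of Python. This sketch is only an illustration of the filtering logic – the function names are invented, and in the patch the [if] and [split] objects perform the actual work.

def correct_octave(current, previous, open_note):
    # Mirrors the [if] expression quoted above: reject pitches that are more
    # than 24 semitones above the open string, or that sit exactly one or two
    # octaves above the previous note (likely harmonic misdetections).
    if (current > open_note + 24
            or current == previous + 12
            or current == previous + 24):
        return previous
    return current

def split_range(note, open_note):
    # Mirrors the [split] object: only accept notes between the open string
    # and 24 frets above it; return None otherwise.
    if open_note <= note <= open_note + 24:
        return note
    return None

# Example for a standard-tuned first string (open E = MIDI 64):
print(correct_octave(88, 76, 64))  # 76 - exactly one octave above the previous note
print(split_range(90, 64))         # None - outside the 64-88 window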

Following this process, the note must go through further averaging, whereby the average pitch of a

note over its entire duration is calculated. This uses the [mean] object to take constant input of the

float value for the current note until a new note is registered. By performing this averaging, many of

the miscalculations that happen around the beginning of a note are avoided. This method of pitch

calculation, along with the various other conditions enforced by [if] objects, allows for accuracy far

beyond the grasp of the [sigmund~] object alone, and serves particularly well for the harmonic-

rich, constantly varying pitch trends associated with guitar sound.

This means, however, that the exact pitch is not registered in perfect real time, but rather, there is a

delay of several milliseconds between playing and accurately documenting a note. This is not seen

as a problem, as the primary role of such pitch calculation is to document values in a database in

order to further inform compositional methods – something that does not have to occur in perfect

real time.

6.3 Technique Tracking

Several methods have been examined in order to facilitate the estimation of playing techniques

based on the sonic qualities of the input signal. These range from analysis of varying rates of

change in amplitude, to the use of the [bonk~] object to compare frequency content across different

ranges to values recorded in control samples.

The amplitude values received from the [sigmund~] object are delivered as a constant stream of

data. This allows for simple calculation of the rate of change in amplitude. This rate of

change can be used to identify techniques by recognising the attack envelopes they create. For

example, palm-muting can be associated with a rapid reduction in amplitude, as the initial attack

creates a high peak, followed by a sharp decline as the note is muted. The effect of this is a high

negative value for rate of change in amplitude, i.e. a very quick reduction in amplitude. In the case

of the patches used for this project, palm-muting is detected by recognising a rate of change of


(< -1.2). A note that is being rung out (not muted, and not followed quickly by another note) can be

detected in a similar manner. When a note is played and allowed to resonate naturally, there will be

a very steady rate of change in amplitude. This means that we can set a range of low negative values

for the rate of change, within which we can identify the note as being allowed to resonate naturally.

The values in this case were (> -0.4 && <-0.1).

It is also possible to identify a vibrato being applied to a note. A vibrato will often occur in the

middle of a note’s lifecycle, i.e. not at the very beginning when the note is struck, or at the very end

when the next note is being fretted. For this reason, the amplitude at the time of vibrato will most

likely be in the middle of the range. Therefore, we can estimate a vibrato by searching for a large

variation in pitch while the amplitude is within this range. For this project, the range for the

amplitudes is (>30. &&<50.). If the cent value for difference in pitch from the closest note is

(>20. || <-20.), then we can assume that a vibrato is occurring. In order to increase accuracy of

detection, the values undergo some averaging.
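
As a rough illustration of how these thresholds combine, the following Python sketch classifies a single analysis frame using the values quoted above. It is an approximation of the Max logic rather than a translation of it; the function name and frame representation are invented for the example.

def classify_technique(amp, amp_rate, cents_off):
    # amp       - current amplitude (roughly 0-75 from [sigmund~])
    # amp_rate  - rate of change in amplitude
    # cents_off - cents between the current pitch and the closest whole note
    if amp_rate < -1.2:
        return "palm mute"     # sharp drop in amplitude just after the attack
    if 30.0 < amp < 50.0 and (cents_off > 20.0 or cents_off < -20.0):
        return "vibrato"       # large pitch wobble while the note is mid-decay
    if -0.4 < amp_rate < -0.1:
        return "ring out"      # slow, steady decay
    return "unclassified"

print(classify_technique(60.0, -2.0, 5.0))   # palm mute
print(classify_technique(42.0, -0.2, 25.0))  # vibrato
print(classify_technique(35.0, -0.3, 3.0))   # ring out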

The advantage of this kind of analysis is that techniques inherent in a musician’s performance can

be used as triggers for events and manipulative controls. The capabilities of a system such as this

are entirely dependent on the mapping system employed. Reboursière et al. (2010) used similar

methods to map the detection of palm-muting to a distortion effect, and identification of string

bends to a wah control.

For this project, the initial intention of such analysis was to integrate the technique detection

capabilities of the software into the control of the performance. However, at the time of the project

software submission, these capabilities had not yet been honed to a level deemed acceptable to

govern the direction of a performance. As this project is considered to be a work in progress – and

continued experimentation and research is planned for the future – these are methods that will be

further investigated as the project proceeds. An object of note, which has been experimented with

extensively throughout the course of this research – although not implemented in the final patches –

is the [analyzer~] object. This object can analyse the spectral content of an input signal and deliver

values for brightness, loudness and noisiness. It would appear that further experimentation with this

object could provide much more accurate technique tracking, which could extend to the detection of

open harmonics, fret muting, scraping, bending etc.


6.4 Audio Sampling

In order to facilitate the use of the input audio as a source for the compositional algorithms, it is

necessary to first sample the audio. This is a process that takes various steps and conditions into

account in order to analyse and evaluate the audio from the guitar.

The basic principle behind the sampling process is that there is one buffer for each note that the

guitar might play. This is a range of 53 notes from a MIDI value of 36 (C1) to 88 (E5) – this is

equivalent to a range from an open note on the sixth string detuned by four semitones to the 24th fret

on a standard tuned first string. For each of these notes, there is a “stock” sample, which was

recorded using the same guitar as used for the rest of the project. These samples had their pitch,

duration, and amplitude tweaked using Melodyne (http://www.celemony.com/cms/index.php?id=products_editor) in order to ensure that the original source

material was as reliable as possible. These samples, however, only act as the initial sources for the

compositional system. As the performance develops, the samples in the buffers are replaced by new

recordings, which offer a more accurate representation of the musician’s sound. As a musician’s

playing style can greatly influence the sounds they produce with an instrument, the recording

section of the patch is designed to embrace the idiosyncrasies captured within their performance.

This system works by counting how many times the musician plays each note. The more times a

note is played, the more significant it seems to be as part of the performance. This being the case, a

note is seen as deserving a unique recording if it is important to the performance – this way, the

compositional program becomes more tailored to the style being played. Each of the 53 notes has a

unique counter in the [note_counter] subpatch. If that note is played ten times, then the MIDI value

of the note will be sent to a list in the [ListToRecord] subpatch. The [ListToRecord] subpatch keeps

a list of all the notes that are due to be recorded. This list will then be referenced by the [Recording]

subpatch, and the next time a note from the list is detected in the input signal, the note will be

recorded. There is an intentional delay applied to the input signal in the [Recording] patch to allow

for the note to first be detected, then compared to the list of notes due to be recorded, and then (if

present in the list) the note will be recorded. The buffers that these notes are recorded into are

entirely separate from the buffers that the initial stock samples were loaded into – essentially, there

is a system of 53 buffers for playback, and six separate buffers for recording. In order to facilitate

multiple source recording, there must be six copies of the [Recording] subpatch – one dedicated to


each string. For example, if a note from the list is detected on the first string, the input source from

the first string will be recorded until a new note is played or until a “note off” message is received.

In order for the recorded note to be accepted and replace the stock sample, however, it must pass a

number of checks. These checks ensure that the best quality sources are recorded for use with the

compositional system. The checks are as follows:

1. The duration of the sample must be greater than 200ms – [TimeCheck] subpatch

2. The average amplitude of the sample must be greater than 35. – [AvgAmpCheck] subpatch

3. The amplitude when recording must not be 0. – [RecOnAmpCheck] subpatch

4. The [record~] object must be active when the record message has been sent –

[RecOnRecCheck] subpatch

5. The running amplitude average must be greater than 30. – [Sensitive_Amp] subpatch

These parameters were all calculated through extensive testing of the capabilities of the system in

accurately identifying and recording notes. If any of the criteria are not met, the note recorded will

remain in the [Recording] buffer, but the note number will not be removed from the list of notes to

record. Essentially, this means that the note will continue to be recorded until a sample of suitable

quality has been attained.

Once a suitable sample has been recorded, the contents of the [Recording] buffer will be saved as an

.aif file. This file will be named after the string from which the sample was recorded. For example,

if a note of MIDI value 64 has been recorded from the first string, and deemed suitable by the tests,

it will be saved as “1.aif”. A message will then be sent to the suitable playback buffer (in this case,

buffer 64), which will load the saved audio file (“1.aif”) into this buffer, thus replacing the stock

sample that had been loaded initially. A summary of the steps in the recording process can be

outlined with the following example, relating to playing a note of MIDI value 64 on the first string:

• Note (64) played ten times

• Note added to list of notes to record

• Note played again on string 1

• Note recognized from list, input signal from string 1 recorded to [Recording] buffer 1

• Recording stopped when next note played on string 1 or “note off” message received for

current note


• If recorded sample passed all quality checks, contents of [Recording] buffer 1 saved as

“1.aif”

• [Playback] buffer 64 receives message to read file “1.aif”

• Previous contents of [Playback] buffer 64 have now been replaced by most recently

recorded note of MIDI value 64
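
The counting and quality-checking logic behind this example can be condensed into a short Python sketch. It is a simplified stand-in for the [note_counter], [ListToRecord] and check subpatches – the data structures and field names are assumptions made for the illustration, and the buffer handling and file saving are omitted.

from collections import Counter

note_counts = Counter()   # how often each MIDI note has been played
notes_to_record = set()   # notes that have earned a fresh recording

def note_played(midi_note):
    # After ten plays, a note is considered important enough to re-record
    note_counts[midi_note] += 1
    if note_counts[midi_note] >= 10:
        notes_to_record.add(midi_note)

def sample_passes_checks(duration_ms, avg_amp, amp_at_record, recorder_active, running_avg):
    # The five quality checks listed above
    return (duration_ms > 200
            and avg_amp > 35.0
            and amp_at_record > 0.0
            and recorder_active
            and running_avg > 30.0)

def try_replace_stock_sample(midi_note, sample):
    # Only notes on the to-record list are considered; a note leaves the list
    # once a sample of suitable quality has been captured.
    if midi_note in notes_to_record and sample_passes_checks(**sample):
        notes_to_record.discard(midi_note)
        return True   # the stock sample in the playback buffer is replaced
    return False

for _ in range(10):
    note_played(64)
print(try_replace_stock_sample(64, dict(duration_ms=350, avg_amp=42.0,
                                        amp_at_record=38.0, recorder_active=True,
                                        running_avg=33.0)))  # True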


7. Compositional Rules & Algorithms

The recording section of this project provides an ever-evolving database of audio, sampled from a

live input source. There is also a responsibility to employ compositional methods with suitable

structure and style, so that the recorded sources can act as a seamless addition to the existing

performance. The majority of rules for composition with this project are based entirely on data that

is interpreted from the guitar input. In this way, the style of what will be composed by the computer

is directly correlated with the patterns and style that can be detected from the musician’s playing.

7.1 Documenting Style

Having already established methods for pitch and amplitude estimation, and sample recording, it is

necessary to consider the documentation of performance data. By recording information about what

notes are played, in what order, and how often, we can attempt to compile a table of data that will

represent the style of a performance.

This table is implemented in the form of a Markov Chain. A Markov chain is a probability system

“in which the likelihood of a future event is determined by the state of one or more events in the

immediate past.” (Roads 1996).

The Markov chain used here is based on the example given in the Max help file for the [anal]

object. The chain uses the [anal] and [prob] objects based inside the [Markov_note] subpatch. The

[anal] object will store values for the last pair of notes played, the current note, and the number of

times a transition has happened between these notes. For example, if one were to play an open E on

the first string, G on first string, then B on the second string, the note values for this would be 64,

67, 59. These notes are divided into pairs, and each pair encoded into one long number that will

contain both values, so in this case the two pairs would be represented as 8259, 8635. The pairs of

notes encoded in these numbers are 64, 67, and 67, 59. To encode pairs of numbers in this way, the

first number is multiplied by 128, and the second number is added to the result. For example, 64, 67

is encoded as follows:

64 * 128 = 8192

8192 + 67 = 8259

The number of times a particular pair of notes has been followed by a note is recorded as the third

argument in the output from the [anal] object. If the transition 64, 67 -> 59 happened 4 times, the

[anal] object would represent this as (8259 8635 4). This data is recorded for every note that is


played by the musician, which allows for constant tracking of the path of pitches, and interpretation

of style based on the probability of playing particular notes and patterns.

These probability statistics are then fed into the [prob] object, which, when called upon, will output

a value based on the probabilities that have been indicated by the musician’s playing style.
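
The pair encoding and the weighted jump performed by [prob] can be illustrated with a small Python stand-in. The dictionary-based transition table and the function names below are invented for the example; the patch itself relies on the [anal] and [prob] objects.

import random

transitions = {}  # {from_state: {to_state: weight}}

def encode_pair(a, b):
    # Encode a pair of notes as one number, as described above
    return a * 128 + b

def record_note(history, new_note):
    # history holds the last two notes played
    if len(history) == 2:
        from_state = encode_pair(history[0], history[1])
        to_state = encode_pair(history[1], new_note)
        weights = transitions.setdefault(from_state, {})
        weights[to_state] = weights.get(to_state, 0) + 1
    history.append(new_note)
    if len(history) > 2:
        history.pop(0)

def next_state(from_state):
    # Weighted random jump, analogous to banging the [prob] object
    options = transitions.get(from_state)
    if not options:
        return None
    return random.choices(list(options.keys()), weights=list(options.values()))[0]

history = []
for note in [64, 67, 59, 64, 67, 62]:
    record_note(history, note)
print(next_state(encode_pair(64, 67)))  # 8635 or 8638, weighted by the observed counts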

The pitches of the notes are not the only values recorded in a manner that is based on probabilities.

The same method is used for interpreting patterns in the amplitude of notes being played. The

amplitude of each note is recorded and given a number from 0-5 depending on the control

amplitude value to which it is closest (40, 45, 50, 55, 60, 65). The probabilities of changing from

amplitude value 0-5 are then recorded as with the pitches. Recording this data allows for more

dynamic variation in the amplitude of the composed output – instead of simply having a set

amplitude for each note.

The probabilities for the time intervals between notes are hardwired into the patch. This approach

was taken simply to ease the interpretational load expected of the system, and to reduce the number

of variables associated with the pool of data from which the compositional decisions would be

made. In later iterations of this project, it is certainly intended to employ the same dynamic

probability methods for the time intervals as were used for the pitch interpretation.

The hardwired probabilities for time intervals are based on the values for conventional note

durations in relation to the specific tempo set for the performance. An overall tempo is set in the

main patch, and from this, values are calculated for the duration of a semibreve, minim, crotchet,

quaver, semiquaver, and demisemiquaver. For example, when the tempo of a performance is

120bpm (giving a semibreve of 2000ms), the values are as follows (milliseconds):

1 = 2000, 1/2 = 1000, 1/4 = 500, 1/8 = 250, 1/16 = 125, 1/32 = 62.5

These intervals are represented as numbers from 1-6 – an interval of 1/4 note would be represented

by the number 3, etc. These intervals represent the amount of time that the system will wait when

one note has been generated, before another note is created.
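
For clarity, the duration calculation can be written out directly. The short sketch below assumes that the tempo is expressed in crotchet beats per minute, so that a semibreve at 120bpm lasts 2000ms, in line with the values above.

def note_durations(bpm):
    # Crotchet duration in milliseconds, derived from beats per minute
    crotchet = 60000.0 / bpm
    semibreve = crotchet * 4
    # Durations indexed 1-6: semibreve, minim, crotchet, quaver,
    # semiquaver, demisemiquaver
    return {i + 1: semibreve / (2 ** i) for i in range(6)}

print(note_durations(120))
# {1: 2000.0, 2: 1000.0, 3: 500.0, 4: 250.0, 5: 125.0, 6: 62.5}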

There are two playing modes that will be detected by the system: Rhythm and Melody. These will

be discussed further in later sections. For the moment, it is sufficient to state that the rhythmic

patterns to be expected when one is playing a melody would usually differ significantly from the

patterns when playing chords. For this reason, it would be wise to recommend different interval

values when the composition is generating chords from those used when generating melody.


These probabilities can be seen in the following tables:

Table 1 - Interval probabilities for chord creation

CHORDS (row = current interval, column = next interval)
        1   2   3   4
1       3   3   1   0
2       2   5   2   0
3       0   2   5   1
4       0   1   2   5

Table 2 - Interval probabilities for melody creation

MELODY (row = current interval, column = next interval)
        1   2   3   4   5   6
1       0   1   4   5   0   0
2       1   1   6   5   0   0
3       0   1   6   8   7   0
4       0   0   2  10   5   0
5       0   0   2   5  10   3
6       0   0   1   6   8   4

As can be seen in Table 1, the probabilities used attempt to reduce unnecessary variation in

rhythms. By using high probabilities for each interval repeating, steady patterns can be created,

with variation introduced when the intervals skip between different values. Table 2 shows a much

more diverse pattern of probabilities, as the scope for melody creation must be much broader. The

same basic approach, however, is taken with the creation of melodies. For most intervals, the values

with the highest probabilities are the current value or those on either side of that value. There is also

a concentration of high probabilities in the middle section – this is an attempt to create regular

patterns, which introduce elements of variation as the performance progresses.
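
To show how a row of Table 1 drives the choice of the next interval, the following sketch performs the same weighted random selection in Python. It parallels the [prob] behaviour already described for pitches; the function name and the starting interval are chosen arbitrarily for the example.

import random

# Weights from Table 1 (chord creation): row = current interval, column = next interval
CHORD_INTERVAL_WEIGHTS = {
    1: [3, 3, 1, 0],
    2: [2, 5, 2, 0],
    3: [0, 2, 5, 1],
    4: [0, 1, 2, 5],
}

def next_interval(current):
    return random.choices([1, 2, 3, 4], weights=CHORD_INTERVAL_WEIGHTS[current])[0]

# Generate a short rhythm of interval indices, starting on a crotchet (index 3)
interval, pattern = 3, []
for _ in range(8):
    interval = next_interval(interval)
    pattern.append(interval)
print(pattern)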

Similar methods of generating compositions based on probability tables in Max/MSP have been

exhibited by Akihiko Matsumoto (World News 2010). Matsumoto has made patches available to

create musical pieces in the style of Gregorian chant, and in the styles of composers such as John Adams

and Brian Eno.


7.2 Comparison

Just as important as identifying which pitches are being played, and in what order, is the context in

which the notes are being played. The key and scale that the notes reside in can provide a wealth of

opportunity to structure chords and patterns. The system has been preloaded with 15 scales. These

scales provide a great deal of scope for variation within a performance. The scales set out in the

system are as follows:

Minor Pentatonic, Minor Blues, Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, Harmonic

Minor, Melodic Minor, Locrian, Hindu, Gypsy, Hungarian Minor, Oriental.

These scales were chosen simply due to the stylistic preferences of the author. The system uses the

[Scale_Identify] subpatch to examine the notes being played by the musician, and compare them to

the notes in these scales. The notes present in a particular scale are calculated by taking the pitch

intervals from that scale, and adding them to the value for the key. For example, a minor pentatonic

scale contains the pitch intervals {0 3 5 7 10}. If the key for the performance is E, the MIDI value

for this is 4 (C = 0, C# = 1, etc.). Therefore, we can add 4 to each of the values indicated for the

minor pentatonic scale – {4 7 9 11 14}. If we take these numbers modulo 12, we can calculate the

base pitches for each note – 4 = E, 7 = G, 9 = A, 11 = B, 14 = D. Hence, we have calculated that

the notes in E minor pentatonic are E, G, A, B, D. These calculations are possible for each of the 15

scales, in any key – leading to a total of 180 variations in the notes recognised as being part of a

particular scale.
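
The scale calculation above reduces to adding the key to each interval and taking the result modulo 12. The following Python sketch restates that arithmetic together with a simple membership test; the function names are invented for the example.

def scale_notes(key_pitch_class, intervals):
    # Add the key to each interval and reduce modulo 12 to obtain pitch classes
    return sorted({(key_pitch_class + i) % 12 for i in intervals})

def in_scale(midi_note, key_pitch_class, intervals):
    return midi_note % 12 in scale_notes(key_pitch_class, intervals)

MINOR_PENTATONIC = [0, 3, 5, 7, 10]

# E has pitch class 4 (C = 0, C# = 1, ...), so E minor pentatonic is E G A B D
print(scale_notes(4, MINOR_PENTATONIC))   # [2, 4, 7, 9, 11]
print(in_scale(67, 4, MINOR_PENTATONIC))  # True  (G)
print(in_scale(66, 4, MINOR_PENTATONIC))  # False (F#)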

As the musician plays the notes, they are compared against the notes that are in the current scale. If

a note that is not in the current scale is played twice, then the system will add this note to the list of

recently played notes, and begin to compare these notes to the 15 scales, choosing whichever scale

most closely matches the notes indicated. This scale will then act as the basis for any chords that are

generated.

For each scale, the possible chord variations have been worked out using the [ChordCalculator]

subpatch. This will scan through a series of 47 chord variations for each note in the scale, and

determine which chords can be played to suit each particular scale. All of the chord variations

available for the scale are then saved, and will be called upon when the system is composing. For

example, if the current scale is E minor pentatonic and an E note is played, the chord variations that

this could produce are E min, E m7, E m11, E 7sus4, E5, E sus4.


When the system is composing melodies, it will simply generate a note from the Markov chain, and

play that sample. However, when the system is composing chords, it will take the number generated

from the Markov chain and use that as the root note for a chord. The note will be sent to the

[ScaleNotes] subpatch, and then through to the [Chords] patch for the scale that is in use. If the note

is in the scale, a random selection will be made from the chord variations available for that

particular note. If the note is not in the scale, a random number will be generated to select a note

that is in the scale, and a random chord variation will be selected from here. For the purposes of this

project, the [ChordCalculator] subpatch and others have been used to inform the system of

compositional possibilities, when generating chords. However, with very little extra work, the same

patches could be used to suggest chords for the musician to play mid-performance, propose

alterations to a score based on common chord progressions, or simply calculate every possible

chord variation available within any specified scale. These tools could prove vital to any composer

who is working on a piece and is interested in quickly calculating the theory behind certain chords,

or discovering new melodic solutions to a particular musical question.

It is also possible to analyse patterns in the context of their relationship to saved melodies. The

ability to identify common riffs and phrases offers significant control over a performance, simply

due to the ability to employ complex event cues based on what is being played. The

[Pattern_Identify] subpatch stores a riff that has been coded into the patch. For the sake of

experimentation with these techniques, the first eight notes of the intro to Led Zeppelin’s song

Stairway to Heaven (http://www.youtube.com/watch?v=w9TGj2jrJk8) have been used as the sample phrase. When the musician is playing a

melody, the notes will be examined by the [Pattern_Identify] patch. If the first note of the riff is

detected, the first check will be passed, and the next note will be tested. If the second note matches

the note from the saved phrase, the third note will be tested etc. If all eight notes are played in the

correct order, the phrase is deemed to have matched successfully. If the saved phrase is detected, a

message is sent to a [movie] object to play a short clip from the 1992 film Wayne’s World (http://www.youtube.com/watch?v=RD1KqbDdmuE), where

the title character is interrupted when attempting to play the Stairway to Heaven intro in a guitar

store, and is pointed to a sign that reads “NO Stairway To Heaven”.
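
The note-by-note matching performed by [Pattern_Identify] can be sketched as a small state machine in Python. The phrase below is a placeholder rather than the actual stored riff, and the behaviour on a wrong note (resetting the match position) is an assumption, as the patch is not described at that level of detail.

class PhraseMatcher:
    def __init__(self, phrase):
        self.phrase = phrase
        self.position = 0

    def note_in(self, midi_note):
        if midi_note == self.phrase[self.position]:
            self.position += 1
            if self.position == len(self.phrase):
                self.position = 0
                return True   # full phrase matched - fire the cue
        else:
            # assumed behaviour: restart, letting the wrong note begin a new attempt
            self.position = 1 if midi_note == self.phrase[0] else 0
        return False

# Hypothetical eight-note phrase (not the riff actually stored in the patch)
matcher = PhraseMatcher([57, 60, 64, 69, 71, 64, 60, 71])
matched = False
for note in [57, 60, 64, 69, 71, 64, 60, 71]:
    matched = matcher.note_in(note)
print(matched)  # True once the final note of the phrase arrives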

This application of the pattern detection algorithms is simply a basic form of experimentation with

the possible methods. The capabilities of the system, in this regard, have not yet been developed to


a level of sufficient reliability to integrate into a performance. However, such abilities, should they

be further developed, could facilitate operations far more beneficial to a performance than the trivial

tasks explored here. For example, if the system could accurately detect a guitar riff that represented

the chorus of a song, this could then be used to cue events that would only happen during the chorus

– such as particular audio effects, sample playback, or even visual or lighting cues. Similarly, the

number of times a certain pattern has been matched could be recorded, hence allowing, for example,

only the fourth chorus to trigger an event, etc. These abilities suggest countless possibilities for the

control of any kind of parameters, simply from recognising what the musician is playing. This

implies a sort of ‘artificial intelligence’; however, the author would be very cautious in using this

term, as it suggests that the system understands what is being played. This is not considered to be

the case – the system simply carries out the specified tasks to test for a particular pattern; it can

neither understand what it is testing, nor whether it has been successful. Any ‘intelligence’ present is

considered to be hardwired and automated as opposed to artificial or self-aware.

7.3 Generation

By this point, the system is advanced enough to interpret what the musician is playing, and identify

techniques, styles and common patterns within a performance. In order to capitalise on these

abilities, there must be an intelligent system of note generation, in order to facilitate the

composition of performance-specific music that will complement the live input of the guitarist.

This system works on a few basic rules:

• Compose only when there is an active input

• If active input is chords, compose melody

• If active input is melody, compose chords

• If active input stops, continue for one bar, then stop composing

An active input basically means that at least one string is creating a note, or audio signal that is of a

high enough amplitude for the sound source to be considered “on”. This allows for the

compositional system to act as a constant complement to the live input, without continuing endlessly

on its own. This active input switch, however, can be artificially turned on from the iPod – this

allows for more dynamics in the performance, where, for example, the musician may want to play

intermittent phrases, but for the system to continue to generate chords.


The [Rhythm_Or_Melody] subpatch estimates whether the musician is playing chords or melody.

This is achieved by estimating the average number of strings that are active at one time. If there are

more than two strings active on average, then it is assumed that the musician is playing chords,

otherwise, the style is detected as a melody. When the input is active, messages will be sent to the

[Create_Next_Note] subpatch at regular intervals according to the tempo set for the overall patch.

These messages will then be passed on according to the time intervals mentioned above in the

interval probability tables. Messages are sent from this patch to the [Markov_Note] patch

mentioned previously. Here, a new note value will be created based on the probability statistics

saved in the Markov chain. If the input mode from the musician is “Rhythm”, then a single note

will be generated from this chain of events. However, if the input mode is “Melody”, then a chord

will be generated, with the note from the Markov chain representing the root note for the chord.

The MIDI note values contained in the selected chord are sent from the [ScaleNotes] patch to the

[Markov_All_Chord] patch. Here, the notes are each sent to different [Playback] patches, where the

specified notes will be played from the necessary buffers.
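
The mode detection and the resulting choice between single notes and chords can be summarised in a few lines of Python. The sketch is illustrative only – the chord lookup is a hypothetical stand-in for the [ScaleNotes] and [Chords] patches, and the function names are invented.

def detect_input_mode(active_string_counts):
    # active_string_counts: recent samples of how many strings are sounding.
    # More than two strings active on average is taken to mean chords.
    average = sum(active_string_counts) / len(active_string_counts)
    return "Rhythm" if average > 2 else "Melody"

def compose_next(input_mode, root_note, chord_lookup):
    # Rhythm input (chords) -> answer with a single melody note;
    # Melody input -> answer with a chord built on the generated root.
    if input_mode == "Rhythm":
        return [root_note]
    return chord_lookup.get(root_note, [root_note])

# Hypothetical chord lookup for a root of E (MIDI 64): an E minor triad
chords = {64: [64, 67, 71]}
print(compose_next(detect_input_mode([4, 3, 5, 4]), 64, chords))  # [64]
print(compose_next(detect_input_mode([1, 1, 2, 1]), 64, chords))  # [64, 67, 71]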

A measure taken to ensure smooth playback of the samples is the use of six separate playback

buffers. The allocation of these buffers works differently from the recording buffers – it is not based

on which string the note is played on. By default, the system will attempt to play a note from buffer

1. However, if buffer 1 is busy, the request will be sent to buffer 2, and so on as far as buffer 6. It

has not yet happened in any experiments with the patch that all playback buffers have been active

at one time.
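
The buffer allocation strategy is a simple first-free search, as the following Python sketch illustrates; the function name and the list of booleans used to represent buffer state are assumptions made for the example.

def choose_playback_buffer(busy):
    # busy: six booleans, True if that playback buffer is currently in use.
    # Try buffer 1 first, then fall through as far as buffer 6.
    for index, in_use in enumerate(busy, start=1):
        if not in_use:
            return index
    return None  # all six buffers busy - not yet observed in practice

print(choose_playback_buffer([True, True, False, False, False, False]))  # 3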

An amplitude envelope is also applied to each new note to reduce clicks in playback. When a note

is created, the amplitude that has been determined by the Markov chain in the [Markov_Amp] patch

is packed together with the pitch value and sent to the specific [Playback] patch. When a new note

is received, the amplitude of the previous note will fade down to zero in 10ms. The amplitude of the

new note will then fade up to the decided amplitude in 10 ms. In this way, if a new note is requested

while a note is already playing in the buffer, there is a sufficient cross-fade to eliminate unwanted

noise.


8. Performance Controls

It is necessary to set out certain guidelines for using this system as part of a performance. One of the

main intentions with the project is to integrate information that is inherent in the performance, as a

means of controlling processes that would otherwise require the musician to attend to the computer.

The initial setup of performance information such as key, tempo and tuning of the guitar is done

with the iPod. Using the controls shown previously in Figure 4, the musician indicates that the next

note to be played will be the root note that sets the key. By pressing the tempo button, the user

activates a ‘tap tempo’ system. With this control engaged, any signals sent from the sixth string will

create a ‘tap’ in the [BPM_Master] patch, allowing for the average tempo between several taps of

the string to be calculated. When one presses the tuning button, the system is informed that the next

note on each string is the open note. By playing all six strings unfretted, the system can store the

MIDI values and inform the rest of the patches as to how the guitar is tuned. These controls are an

important part of creating a compositional system that can be controlled from one location, with

little necessity to refer to the computer.
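
The tap-tempo calculation can be illustrated directly: the sketch below assumes tap times arrive as timestamps in milliseconds and averages the gaps between consecutive taps to obtain a tempo in beats per minute. The function name is invented for the example.

def tempo_from_taps(tap_times_ms):
    # Average the intervals between consecutive taps and convert to BPM
    gaps = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    average_gap = sum(gaps) / len(gaps)
    return 60000.0 / average_gap

print(tempo_from_taps([0, 500, 1010, 1495]))  # roughly 120 BPM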

The iPod is mounted on the guitar using a Dunlop D65 iPhone Holder (http://www.jimdunlop.com/product/d65-SturdyStand). This allows for the iPod to

be conveniently positioned below the strings, so that it can provide easy access to the control

interface, as seen in Figure 11. During the initial setup, the user should press the ‘calibrate’ button

on the first page of iPod controls, once the position of the guitar has been suitably adjusted.

The output signals from the compositional processes, and the live performance signal, both pass

through a series of effects before they are sent to the speakers. These effects are an altered version

of those taken from the Max/MSP tutorial on building a guitar processor (Grosse 2011). Similar

effects are used for both the live input, and the compositional output. These effects can be seen in

Figure 12. Many of the controls for these processes are acted upon by the data from the iPod’s

accelerometer. In this way, the movement of the guitar in certain directions can control

manipulative parameters. These movements are detected by calculating the rate of change in

orientation of the iPod across three axes. Similar methods of controlling guitar effects were

explored by Bouillot et al. (2008), where the accelerometer data from a Wii remote was mapped in

Max/MSP.



Figure 11 - iPod touch mounted on guitar with Dunlop D65 (Uses suction cups)

The mapping of the accelerometer data is as follows:

• Shake up/down * 1 (Switch): Clear Recorded Buffers

This will clear any notes that were recorded from the live input throughout a performance,

and replace them with the stock samples originally stored in the buffers.

• Shake up/down * 2 (Switch): Clear Recorded Buffers & Markov Probability Table

This will clear the recorded samples, as above, and also clear any probability data stored in

the [anal] object in the [Markov_note] patch.

• Tilt Body Down (Switch): Reset scale

This will reset the current scale to minor pentatonic – this is useful when the detected scale

is producing chords that use notes that are not common in the melodic performance.

• Tilt Neck Down (Switch): Live loop record on/off

This will record a loop of one bar in duration, from the live input. This allows for the

layering of multiple loops, leading to a much more dynamic performance. This effect acts in


a similar way to a Boss Loop Station (http://www.bosscorp.co.jp/products/en/RC-20/). Tilting the neck down by approximately 45 degrees

will switch recording on. Tilting the neck down again will switch recording off.

• Tilt Neck Down *2 (Switch): Live loop play on/off

The playback of the loop function can be controlled by tilting the neck down at a more

extreme angle (approximately 90 degrees). Tilting the neck in this way will stop loop

recording and turn off any loops that are currently playing.

• Tilt Body Up (Scale): Increase Reverb Level

Tilting the body of the guitar up towards the guitarist (so the pickups face the ceiling) will

increase the reverb level. This mapping allows for reverb to be increased in lulls in playing,

in order to emphasise certain notes.

• Tilt Neck up (Scale): Increase distortion level

Tilting the neck of the guitar up will increase the level of distortion applied to the live input.

This, again, allows for increased emphasis on certain parts of a performance.

Figure 12 – Max/MSP [FX] patch, showing Digital Signal Processing applied to live signal



The EQ of both the live input and any recorded loops can be controlled remotely from the XY pad

on the third page of the iPod interface, shown in Figure 13. The value for location on the X axis

will control the Mid frequency of the EQ, and the Y axis value is mapped to the mid range (Q).

Figure 13 – XY Pad on page three of iPod interface, controls EQ of live input and loop playback

Once the signals have passed through the effects processes, their volume and panning settings are

adjusted before they are output. This is controlled from the second page of the iPod interface, as

seen in Figure 14.

Figure 14 – Page 2 of iPod interface. Controls for volume, panning, metro on/off, Alg. Gen. on/off, Tuners on/off


Controls on the second page of the iPod interface are as follows:

• Volume of Metro (MIDI piano plays root note of key on every beat)

• Metro on/off

• Volume of algorithmically generated output

• Alg. Gen. Mode on (Sets input as active, so system will continue to compose regardless of

input from guitar)

• Panning of algorithmically generated output

• Volume of live guitar output

• Open Tuners (Displays tuner for each string in main patch on computer)

• Panning of live guitar output


9. Conclusions & Further work

9.1 Conclusion

This project was successful in exploring the musical elements set out as goals in the beginning.

Much of the knowledge acquired throughout the course of the project came from experimentation

with methods in Max/MSP rather than traditional research into previous approaches and theories.

The reason for this is largely due to the infancy of the field. Although algorithmic composition has

existed in different forms for centuries, the use of computer software to dynamically sample live

guitar input for use in a performance-based algorithmic compositional system has not been widely

documented.

One of the main intentions of this project is to analyse and interpret the music that a guitarist plays,

and to use this information to inform compositional methods which can supplement a performance,

or even suggest solutions to compositional stalemates. Much of the success of this project as a

performance tool lies in the ability to relieve the musician of the responsibility of having to

attend to external control mechanisms. By integrating the entire control system into just what can be

found on the surface of the guitar – or indeed, in the notes the instrument produces – the musician is

given much more freedom to concentrate on questions of composition and performance, rather than

mixing and manipulation. Essentially, reducing the cognitive load placed on a musician during a

performance could place them in a better position to consider the musical suggestions that the

computer proposes – interacting with the computer, and developing ideas in tandem with one

another, as if they were just another pair of musicians.

Analysing our own musical style, and searching for solutions that we might, ourselves, suggest,

were we not blinded by habitual lethargy, could surely serve to improve our understanding of

composition and perception of beauty within music. Regarding the implications of computer-

assisted creativity, such as that explored in this project, David Cope stated:

“The feelings that we get from listening to music are something we produce; it’s not there in the

notes. It comes from emotional insight in each of us – the music is just the trigger.”

(Adams 2010)


9.2 Further Work

To draw a line under the work that has been completed to date would not be true to the goals of The

Algotar Project. Many areas of composition and performance have been studied and experimented

with throughout this project, but the work completed has only scratched the surface of what is

possible with such a system.

The ability to use performance data as triggers for manipulative effects and event cues offers huge

opportunities in the field of performance – potentially allowing for total integration of system

controls and musical delivery. In particular, improvements in the area of pattern and phrase

detection could prove priceless in relation to event cues, and could serve particularly well for lone

performers, who work with samples and pre-recorded accompaniment.

Further development in compositional methods could present the system as a valuable tool in the

process of composing. At present, the analysis of style does not extend beyond the level of second

order Markov chains – this still leaves much room for random movement throughout a piece. By

employing higher order Markov chains, a more defined structure could be achieved, and a personal

style could be honed. It is not suggested that the system delve into the area of analysing other

composers, as the compositional content is intended to be entirely derivative of the composer’s own

style.

One aspect briefly explored in this project is the use of pitch tracking to record a MIDI file to

represent the live input from the guitar. In this way, the user’s guitar could act as a replacement for

a MIDI keyboard. This is nothing that has not been achieved with dedicated MIDI guitars; however,

what it does offer is the ability to use a regular electric guitar to perform tasks specific to MIDI. For

example, recording a performance to a MIDI file allows for instant transcription (or tabbing) of the

notes played. This could prove a great advantage to musicians who wish to improvise in order to

create ideas, and later review a tab of exactly what they played. Much simpler applications are also

afforded, such as the use of the guitar to play MIDI instruments. Traditionally, a musician must be

adept at the use of a keyboard interface, in order to create music with virtual instruments. However,

if this system is further developed, it could offer a new avenue to composing with software

instruments. Layered string sections, brass fills, and piano phrases could all be at the disposal of a

standard electric guitar, which can simultaneously deliver its own natural sound.


Index of Appendices

Appendix 1 – Networking Applications Screenshots
Appendix 2 – Bloom Screenshot
Appendix 3 – Important Max/MSP Objects Used
Appendix 4 – Important Max Subpatches created
Appendix 5 – Hierarchy of Max Patches
Appendix 6 – Necessary Max Objects
Appendix 7 – System Screenshots


Appendix 1 – Networking Applications Screenshots

The following pages contain screenshots from the various networking applications that were examined as options for the system. Representations of iPod and Max/MSP interfaces are shown.

Appendix 1a - C74 iPod Screen

Appendix 1b – C74 Max/MSP Screen


Appendix 1c – MrMr iPod Screen

Appendix 1d – MrMr Max/MSP Screen


Appendix 1e – TouchOSC iPod Screen

Appendix 1f – TouchOSC Max/MSP Screen


Appendix 2 – Bloom Screenshot

This image is a screenshot from Brian Eno's generative music app "Bloom". Each circle represents a note. The size of the circle reflects its stage in its life cycle. A note begins as a small, concentrated circle, but as it begins to decay it spreads out and becomes a larger, more opaque circle. The colour of the circle represents the iteration in the generative series in which the note originated.


Appendix 3 – Important Max/MSP Objects Used

The following is a list of Max/MSP objects that are of particular importance to this project, with a brief description of their function.

[%] Modulo operator. Divides two numbers and outputs the remainder.

[anal] Transition probability analysis, correctly formatted for the [prob] object. anal takes an incoming integer stream and analyses the transition probabilities between successive entries. It outputs a list of three numbers: state 1, state 2 and the current weight for a transition between the two states.

[analyzer~] FFT-based perceptual analysis. Pitch tracker based on fiddle~ by Miller Puckette.

[bonk~] Takes an audio signal input and looks for "attacks", defined as sharp changes in the spectral envelope of the incoming sound. Optionally, and less reliably, bonk~ can check the attack against a collection of stored templates to try to guess which of two or more instruments was hit. bonk~ is described theoretically in the 1998 ICMC proceedings, reprinted on crca.ucsd.edu/~msp.

[buffer~] Store audio samples. buffer~ works as a buffer of memory in which samples are stored to be saved, edited, or referenced in conjunction with many different objects, including play~/groove~ (to play the buffer), record~ (to record into the buffer), info~ (to report information about the buffer), peek~ (to write into/read from the buffer like the table object), lookup~ (to use the buffer for waveshaping), cycle~ (to specify a 512-point waveform), and wave~ (to specify a waveform).

[delay~] Delays a signal by a certain amount of time. This object uses the Max time format syntax; delay times can be either samples (determined by the sampling rate) or tempo-relative values.

[if] Conditionally send messages. Conditional statement in if/then/else form.

[mean] Find the running mean of a stream of numbers. Calculates the mean (average) of all the numbers it has received.

[movie] Play QuickTime movies in a window (requires QuickTime).


[OSC-route] Dispatch messages through an OpenSound Control address hierarchy with pattern matching. OSC-route is modelled on Max's [route] object, but it uses slash-delimited (URL-style) OpenSound Control addresses. Each OSC-route object implements one node in the address tree.

[prob] Build a transition table of probabilities and generate weighted random series of numbers. prob accepts lists of three numbers in its inlet. The third number represents the weight of the probability of going from the state represented by the first number to the state represented by the second. For example, "1 2 4" means that there is a weight of 4 for going from state 1 to state 2. When prob receives a bang, it makes a random jump from its current state to another state based on its current weighting of transitions. If a transition can be made, the new state is sent out the left outlet; if not, a bang is sent out the right outlet. For any particular state, the weights of all possible transition states are summed. Thus, if a state could jump to three states with weights of 3, 4 and 1, the first would occur 37.5% of the time, the second 50% of the time, and the third 12.5%. Note that any state can make a transition to itself with a list of the form (state state weight).

[random] Generate a random number.

[record~] Copy a signal into a buffer~. record~ records a signal into a buffer~ object. Up to 4 channels can be recorded by specifying the optional argument. Recording stops when the buffer~ has been filled.

[route] Takes a message and tries to match its first argument to route's own arguments. The rightmost outlet passes any message that matched no other choice, so routes may be ganged to provide more choices.

[sigmund~] Sinusoidal analysis and pitch tracking. sigmund~ analyses an incoming sound into sinusoidal components, which may be reported individually or combined to form a pitch estimate.

[split] Look for a range of numbers. split takes an int and sends it out the left outlet if the value is inclusively between the two argument values; otherwise it sends it out the right outlet.

[sumlist] Outputs the sum of a list (sigma).

[udpsend] / [udpreceive] Send Max messages over the network using UDP (see the sketch after this list). Both udpsend and udpreceive support the 'FullPacket' message used by the CNMAT OpenSoundControl external. This means they can be used as drop-in replacements for the [otudp] object.
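As a small illustration of the message format that [udpsend]/[udpreceive] and [OSC-route] handle, the following Python sketch sends slash-delimited OSC messages over UDP using the third-party python-osc package; the IP address, port and address patterns are illustrative assumptions rather than the values used in the project.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 8000)   # machine running Max/MSP (hypothetical address)

# On the Max side, [udpreceive 8000] followed by [OSC-route /touch] would
# split these messages by address node, much as [route] does for plain messages.
client.send_message("/touch/page", 2)
client.send_message("/touch/faders/1", 0.75)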


Appendix 4 – Important Max Subpatches Created

The following list mentions some of the most important subpatches created for this project, and outlines their function.

[Algotar_main] Main patch. Contains all other patches. Provides the interface for the system.

[averagenote] Calculates the average pitch of a note over the total duration of the sound. Provides much more accurate results than other methods.

[Chords] Outputs the individual notes contained in any of 47 chord variations for any root note.

[ChordCalculator] Calculates chord variations for a specific scale.

[colour] Adjusts the colour of the note display for each string depending on amplitude level.

[compose] Contains patches for calculating all chords available in a particular scale.

[ListToRecord] Stores a list of all notes that are due to be recorded into the buffers.

[Markov_All_Chord] Receives the individual notes of a chord to be played and sends values to separate playback buffers.

[Markov_note] Generates a value for the pitch of a note to be played, based on a second-order Markov chain (probabilities table). Based on an example patch in the [anal] help file.

[note_counter] Counts how many times each note has been played since it was last recorded.

[Pattern_Identify] Identifies melodic patterns (riffs) in the guitar input.

[pitch_abstraction_tuohy] Main pitch analysis patch. Calculates pitch and amplitude values for inputs. Based on a patch by Reboursière et al. (2010).

[Playback] Plays back a recorded sample based on the note value generated by the compositional system.

[Recording] Records samples from the live input. Saves notes that are played throughout the performance.


[Rhythm_Or_Melody] Estimates whether the guitarist is playing mostly chords or melody.

[Scale_Identify] Estimates the current scale in use by comparing recent notes to the notes contained in 15 set scales (see the sketch after this list).

[ScaleNotes] Takes a root note for a chord and outputs a chord variation based on the scale selected.

[tuner] Guitar tuner patch. Provides a string-specific guitar tuner.

[What'sTheRealNote?] Calculates notes on an individual level. Outputs a note value every time a new note is picked or the pitch changes (bend, hammer-on etc.).
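The following minimal Python sketch shows one way a scale estimate of the kind [Scale_Identify] makes could be computed, by counting how many recently played pitch classes belong to each candidate scale. The three scales shown are an illustrative subset, not the project's actual set of 15.

# Candidate scales as sets of pitch classes (C = 0, C# = 1, ..., B = 11).
SCALES = {
    "C major": {0, 2, 4, 5, 7, 9, 11},
    "G major": {7, 9, 11, 0, 2, 4, 6},
    "A minor pentatonic": {9, 0, 2, 4, 7},
}

def guess_scale(recent_midi_notes):
    # Score each scale by how many of the recent pitch classes it contains.
    pitch_classes = [n % 12 for n in recent_midi_notes]
    scores = {name: sum(pc in notes for pc in pitch_classes)
              for name, notes in SCALES.items()}
    return max(scores, key=scores.get)

print(guess_scale([60, 62, 64, 65, 67]))  # C D E F G -> "C major"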


Appendix 5 – Hierarchy of Max Patches

The following list indicates the hierarchy of Max subpatches throughout the system.

Algotar_Main

Tuning Key BPM Master

{BPM} Rhythm Or Melody Any Note Change All Note Counters {Note Counter (36-88)} ReadAllBuffers {ReadBuffers (36-88)} Buffer Playing Buffer Recording Active Input All Params Markov Note Markov Amp Markov All Note {AllBufferTimes {BufferTime (36-88)}} Markov All Chord {AllBufferTimes (1-6) {BufferTime (36-88)}} Scale Identify {Bang Scales} {Scales Checker {Bang Scales (0-11)}} Pattern Identify Compose {Modes {Notes Print}} {Run Through} {Chords} {Scales Decider} {Scale Notes {Chord Calculator} {Scale Chords (0-14)} {Play} Create Next Note All Buffer Times Tuner (1-6) All Processes (1-6) {Attack Analysis {Bucket Average}} {Recording {Average Note}


{Time Check} {Avg Amp Check} {Rec On Amp Check} {Rec on Rec Check} {Sensitive Amp Check} {Check All Params} {What's The Real Note?} {Play Params Check} {Send Read Message} {Clear List Num}} {List To Record} {Playback}} Colour (1-6) Pitch Abstraction (1-6) Into Touch {Page Rec (1-3)}

{Page Send (1-3)} {Rate Change XYZ}


Appendix 6 – Necessary Max Objects

The following is a list of Max objects that are necessary in order to run the patches contained in this project.

[analyzer~]
[bonk~]
[OSC-route]
[sigmund~]
[sum-list]


Appendix 7 – System Screenshots

The following pages contain screenshots of some of the patches that make up the system.

Appendix 7a – [Algotar_Main] patch, the main interface created in Max/MSP

Appendix 7b – [Scale_Detect] patch, identifies current scale being played


Appendix 7c – [Markov_Note] patch, generates a new note based on the input data provided to the Markov chain


Appendix 7d – [Playback] patch, plays sample based on note generated by compositional system


References

Adams, T. (2010) 'David Cope: "You pushed the button and out came hundreds and thousands of sonatas"', The Observer, available at: http://www.guardian.co.uk/technology/2010/jul/11/david-cope-computer-composer [Accessed April 10, 2011].
Ängeslevä, J. et al. (2003) 'Body Mnemonics', Mobile HCI Conference, Udine, Italy.
Bates, E., Furlong, D. & Dennehy, D. (2008) 'Adapting polyphonic pickup technology for spatial music performance', Proceedings of the 2008 International Computer Music Conference, SARC Belfast.
Blitstein, R. (2010) 'Triumph of the Cyborg Composer', Miller-McCune, available at: http://www.miller-mccune.com/culture-society/triumph-of-the-cyborg-composer-8507/ [Accessed February 11, 2011].
Bongers, B. (2007) 'Electronic musical Instruments: Experiences of a new Luthier', Leonardo Music Journal, pp. 9–16.
Bouillot, N. et al. (2008) 'A mobile wireless augmented guitar', International Conference on New Interfaces for Musical Expression, Genova, Italy.
Carfoot, G. (2006) 'Acoustic, Electric and Virtual Noise: The Cultural Identity of the Guitar', Leonardo Music Journal, Volume 16, pp. 35–39.
Charles, J.F. (2008) 'A tutorial on spectral sound processing using Max/MSP and Jitter', Computer Music Journal, 32(3), pp. 87–102.
Cook, P. (2001) 'Principles for designing computer music controllers', Proceedings of the 2001 Conference on New Interfaces for Musical Expression, pp. 1–4.
Cope, D. (2000) The Algorithmic Composer, Madison, Wisconsin: A-R Editions.
Cope, D. (2001) Virtual Music: Computer Synthesis of Musical Style, Cambridge, Mass.: MIT Press.
Eno, B. & Chilvers, P. (2010) 'Bloom - Generative Music - Creative apps for the iPhone and iPod touch', generativemusic.com, available at: http://www.generativemusic.com/ [Accessed December 21, 2010].
Green, B. (2009) 'Blog - seeyouinsleep.com - The work of Brian Green', SeeYouInSleep, available at: http://seeyouinsleep.com/ [Accessed October 8, 2010].
Grosse, D. (2008) 'Max 5 Guitar Processor', Cycling 74 Online, available at: http://cycling74.com/2008/07/28/max-5-guitar-processor-part-1/ [Accessed January 11, 2011].
Jones, A. (2008) 'Inventing the Future', Boston Phoenix Online, available at: http://thephoenix.com/boston/life/82943-inventing-the-future/ [Accessed October 8, 2010].
Lähdeoja, O. (2008) 'An approach to instrument augmentation: the electric guitar', Proceedings of the 2008 Conference on New Interfaces for Musical Expression (NIME08).
Mclean, P. (2008) 'Apple outlines shift in strategy, rise in R&D spending, more', AppleInsider, available at: http://www.appleinsider.com/articles/08/11/05/apple_outlines_shift_in_strategy_rise_in_rd_spending_more.html [Accessed April 3, 2011].
Millard, A. (2004) The Electric Guitar: A History of an American Icon, Baltimore: Johns Hopkins University Press.
Reboursière, L. et al. (2010) 'Multimodal Guitar: A Toolbox For Augmented Guitar Performances', Proceedings of NIME, pp. 415–418.
Roads, C. (1996) The Computer Music Tutorial, Cambridge, Mass.: MIT Press.
Saenz, A. (2009) 'Music Created by Learning Computer Getting Better', Singularity Hub, available at: http://singularityhub.com/2009/10/09/music-created-by-learning-computer-getting-better/ [Accessed April 10, 2011].
Schloss, W.A. (2003) 'Using contemporary technology in live performance: The dilemma of the performer', Journal of New Music Research, 32(3), pp. 239–242.
Wikipedia (2011) 'Pickup (music technology)', available at: http://en.wikipedia.org/wiki/Pickup_(music_technology) [Accessed March 18, 2011].
World News (2010) 'Algorithmic Composition', WorldNews.com, available at: http://wn.com/algorithmic_composition [Accessed April 12, 2011].


Bibliography

Bevilacqua, F., Müller, R. & Schnell, N. (2005) 'MnM: a Max/MSP mapping toolbox', Proceedings of the 2005 Conference on New Interfaces for Musical Expression, pp. 85–88.
Bongers, B. (2000) 'Physical interfaces in the electronic arts', Trends in Gestural Control of Music, pp. 41–70.
Burns, A.M. & Wanderley, M.M. (2006) 'Visual methods for the retrieval of guitarist fingering', Proceedings of the 2006 Conference on New Interfaces for Musical Expression, pp. 196–199.
Carrascal, J. (2008) 'Look, ma, no wiimote!', JP Carrascal Blog, available at: http://jpcarrascal.com/blog/?p=202 [Accessed October 8, 2010].
Dannenberg, R.B. (2007) 'New interfaces for popular music performance', Proceedings of the 7th International Conference on New Interfaces for Musical Expression, p. 135.
Green, B. (2008) 'YouTube - BrianWilliamGreen's Channel', YouTube, available at: http://www.youtube.com/user/BrianWilliamGreen [Accessed October 8, 2010].
Guaus, E. et al. (2010) 'A left hand gesture caption system for guitar based on capacitive sensors', Proceedings of NIME, pp. 238–243.
Hindman, D. (2006) 'Modal Kombat: Competition and Choreography in Synesthetic Musical Performance', Proceedings of the 2006 Conference on New Interfaces for Musical Expression (NIME06), Paris.
Hoadley, R. (2007) 'Algorithms and Generative Music', available at: www.rhoadley.org/presentations [Accessed December 6, 2010].
Hunt, A., Wanderley, M.M. & Paradis, M. (2003) 'The importance of parameter mapping in electronic instrument design', Journal of New Music Research, 32(4), pp. 429–440.
Orio, N., Schnell, N. & Wanderley, M.M. (2001) 'Input devices for musical expression: borrowing tools from HCI', Proceedings of the 2001 Conference on New Interfaces for Musical Expression, pp. 1–4.
Overholt, D. et al. (2009) 'A multimodal system for gesture recognition in interactive music performance', Computer Music Journal, 33(4), pp. 69–82.
Peretti, B.W. (2000) 'Instruments of Desire: The Electric Guitar and the Shaping of Musical Experience (review)', Notes, 57(2), pp. 418–420.
Pinch, T.J. & Bijsterveld, K. (2003) '"Should One Applaud?": Breaches and Boundaries in the Reception of New Technology in Music', Technology and Culture, 44(3), pp. 536–559.
Puckette, M. (2007) The Theory and Technique of Electronic Music, World Scientific Publishing Company.
Purbrick, J. (2009) 'The Creation Engine No. 2: An Open Source, Guitar Mounted, Multi Touch, Wireless, OSC Interface for Ableton Live', available at: http://jimpurbrick.com/2009/12/17/open-source-guitar-mounted-multi-touch-wireless-osc-interface-ableton-live/ [Accessed October 8, 2010].
Salamon, J. & Nieto, U. (2008) 'YouTube - G-Tar the Gesture Enhanced Guitar', available at: http://www.youtube.com/watch?v=RPHAXM7az1Q [Accessed October 8, 2010].
Tammen, H. (2007) 'Endangered guitar', Proceedings of the 7th International Conference on New Interfaces for Musical Expression, p. 482.
U.S. Dept. of Labor (2010) 'Musicians, Singers, and Related Workers', available at: http://www.bls.gov/oco/ocos095.htm [Accessed November 15, 2010].
YouTube (2008) 'YouTube - rayhan314's Channel', available at: http://www.youtube.com/user/rayhan314#p/a/u/0/C_G8hwjuug0 [Accessed October 8, 2010].
Wikipedia (2010) 'Generative music', available at: http://en.wikipedia.org/wiki/Generative_music#Biological.2Femergent [Accessed December 21, 2010].
