Page 1: Artificial Music

Artificial Music

A review of the use of Artificial Intelligence and Artificial Life in Music

By

Dr. Jonathan P. Wakefield

Department of Engineering and Technology

School of Computing and Engineering

University of Huddersfield

Page 2: Artificial Music

Brainwaves

• Ideally would like a machine that can convert imagined music into audio

• IBVA (Interactive Brainwaves Visual Analyser) is a system that can map certain EEG (electroencephalogram) signals to specific musical actions. Electrodes need to be attached to the performer's scalp, and the user has to learn how to make their brain give off the right electrical patterns to trigger the desired musical events

Page 3: Artificial Music

Sound Design – SST design (1)

• Ricardo A. Garcia has undertaken work in automatically designing Sound Synthesis Techniques (SSTs)

• Basically he has a target sound he wants to synthesise

• Views design as a search of a huge multi-dimensional SST space

• Work is at level of proof of concept

Page 4: Artificial Music

Sound Design – SST design (2)

How does it work?

1. Produce a population of random topologies

2. Use mathematical optimisation techniques to determine each topology's parameters, e.g. filter cutoff

3. Each candidate solution is evaluated using a fitness function (error metric)

4. If a good enough solution has been found then FINISH; otherwise allow the best solutions to reproduce and mutate, and go back to step 2 with the new population of candidate topologies (a minimal sketch of this loop follows below)
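
Below is a minimal Python sketch of this evolutionary loop, for illustration only: the "topology" here is just a sum of sine oscillators with evolvable amplitudes and frequencies (not Garcia's actual SST representation), the target sound and stopping threshold are invented, and the separate parameter-optimisation step is folded into mutation for brevity.

# Minimal sketch of the evolutionary SST-design loop (illustrative only).
import math
import random

SAMPLES, SR = 512, 8000
TARGET = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SAMPLES)]  # toy target sound

def render(topology):
    """Render a candidate: each gene is an (amplitude, frequency) pair."""
    return [sum(a * math.sin(2 * math.pi * f * n / SR) for a, f in topology)
            for n in range(SAMPLES)]

def fitness(topology):
    """Error metric: mean squared difference from the target sound (lower is better)."""
    return sum((r - t) ** 2 for r, t in zip(render(topology), TARGET)) / SAMPLES

def random_topology():
    return [(random.uniform(0.0, 1.0), random.uniform(50.0, 1000.0)) for _ in range(3)]

def mutate(topology):
    return [(a + random.gauss(0, 0.05), f + random.gauss(0, 20)) for a, f in topology]

population = [random_topology() for _ in range(20)]   # 1. random initial topologies

for generation in range(50):
    population.sort(key=fitness)                      # 2./3. evaluate with the fitness function
    if fitness(population[0]) < 1e-3:                 # 4. good enough, so finish
        break
    parents = population[:5]                          # otherwise keep the best solutions ...
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]  # ... and breed

print("best error:", fitness(population[0]))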

Page 5: Artificial Music

Sound Design – Exploring Sound Space (1)

• Hardware and software synths are generally hardwired with a particular SST e.g. subtractive, additive, physical modelling …

• To generate useful and interesting sounds with a new SST a user has to go through a learning curve

• James Mandelis has addressed this problem with his Genophone hyperinstrument – it allows users to perform sound design without understanding the underlying form of synthesis

• Works at the level of System Exclusive messages

Page 6: Artificial Music

Sound Design – Exploring Sound Space (2)

• How does it work? (see the sketch below)

1. Start with a population of hand-crafted sounds

2. User evaluates each sound (parameter set)

3. User then selects which sounds (parameter sets) s/he wants to use as parents

4. Selected parents generate new sounds (parameter sets) by reproduction and mutation

5. Repeat from step 2 until happy with the sound(s)
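
A minimal sketch of that interactive loop in Python, assuming a "patch" is simply a list of MIDI-style parameter values (0-127) and that ratings typed in by the user stand in for the fitness function; the real Genophone manipulates System Exclusive data for a hardware synth.

# Sketch of interactive evolution of synth patches (illustrative only:
# a "patch" here is a plain list of numbers, not real SysEx data).
import random

def crossover(parent_a, parent_b):
    """Child takes each parameter from one parent or the other."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(patch, rate=0.1):
    """Occasionally nudge a parameter, keeping values in the 0-127 range."""
    return [min(127, max(0, p + random.randint(-8, 8))) if random.random() < rate else p
            for p in patch]

# 1. start with a population of hand-crafted sounds (random stand-ins here)
population = [[random.randint(0, 127) for _ in range(8)] for _ in range(6)]

while True:
    # 2. the user auditions and rates each sound (here typed in as a number)
    for i, patch in enumerate(population):
        print(i, patch)
    ratings = [float(input("rating for sound %d: " % i)) for i in range(len(population))]

    # 3. the user-preferred sounds become the parents
    ranked = sorted(range(len(population)), key=lambda i: ratings[i], reverse=True)
    parents = [population[i] for i in ranked[:3]]

    # 5. stop once the user is happy with one of the sounds
    if input("happy with a sound? (y/n) ").strip().lower() == "y":
        break

    # 4. the parents reproduce and mutate to give the next generation
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(len(population) - len(parents))]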

Page 7: Artificial Music

Performance Mappings (1)

• Controller assignments also need knowledge of SST to map controls to useful combinations of SST parameters

• Mandelis’ Genophone also allows the evolution of performance mappings

• This is carried out in the same way as the evolution of synthesis parameters and carried out at the same time

• Uses a data glove with five flex sensors – one for each finger.

Page 8: Artificial Music

Performance Mappings (2)

• These are interfaced to 5 control knobs on a Korg Prophecy synth to make realtime changes to sound

• Each controller can control 4 parameters

• This allows local exploration of the sound space, with the previous "Sound Design" stage allowing global exploration of the sound space (see the sketch below)
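
As a rough illustration of the kind of mapping being evolved, the sketch below drives four hypothetical synth parameters from each of the five flex sensors through a weight matrix; the weights, sensor scaling and parameter ranges are all assumptions for the example, not the Genophone's actual mapping representation.

# Sketch of a performance mapping: 5 glove sensors, each steering 4 synth
# parameters, via a weight matrix that could itself be evolved. Illustrative only.
import random

N_SENSORS, PARAMS_PER_SENSOR = 5, 4

# one candidate mapping: how strongly each sensor moves each of its parameters
mapping = [[random.uniform(-1.0, 1.0) for _ in range(PARAMS_PER_SENSOR)]
           for _ in range(N_SENSORS)]

def apply_mapping(sensor_values, base_patch):
    """Offset the base patch in real time from the current finger-flex readings (0.0-1.0)."""
    patch = list(base_patch)
    for s, flex in enumerate(sensor_values):
        for p in range(PARAMS_PER_SENSOR):
            idx = s * PARAMS_PER_SENSOR + p
            patch[idx] = min(127, max(0, patch[idx] + int(64 * flex * mapping[s][p])))
    return patch

base_patch = [64] * (N_SENSORS * PARAMS_PER_SENSOR)   # 20 parameters at their mid value
print(apply_mapping([0.2, 0.9, 0.0, 0.5, 0.7], base_patch))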

Page 9: Artificial Music

Computer-based DJ (1)

• Dave Cliff of Hewlett-Packard has developed a computer DJ system that sequences (i.e. chooses which tracks to play and in what order) and mixes (i.e. beat-matches and crossfades)

• In 2000 it played off against a human DJ in a club for New Scientist (45 out of 72 clubbers spotted the computer DJ; none of the DJ judges were fooled)

Page 10: Artificial Music

Computer-based DJ (2)

• In 2001 a more sophisticated version was made

• Clubbers wear wristwatches to provide feedback: these monitor location, heart rate and perspiration, an accelerometer monitors activity, and the data is sent to the computer via Bluetooth

• Songs are split into individual tracks, e.g. drums, bass, vocals, keyboard hooks

• HPDJ picks individual tracks and overlays them

• Uses a GA to evolve good music, with the clubbers providing the fitness function

Page 11: Artificial Music

Composition – ATNs (1)

• David Cope has a piece of software called EMI (Experiments in Musical Intelligence)

• It is derived from Mozart’s Dice Game but is much more advanced

• Most importantly it doesn’t have a single fixed phrase template

• Uses ATNs (Augmented Transition Networks), a technique used in natural language processing for representing a formal grammar

Page 12: Artificial Music

Composition – ATNs (2)

• How does it work?

– A human decides on a set of example pieces for EMI to analyse

– EMI searches through these pieces using a pattern-matcher to find recurring templates of significant length.

– EMI also builds up lists of all the alternative fragments which can fit each slot in a template.

– EMI uses ATNs to specify order in which slots and templates may be positioned

Page 13: Artificial Music

Composition – ATNs (3)

– The ATNs represent valid musical sequences in a particular style and are used to generate music in that style

– The final stage is a pattern-matcher which extracts signatures from the examples and then adds those signatures to the generated pieces (a toy sketch of the template idea follows below)
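
The toy sketch below illustrates the template-and-slot idea with a tiny hand-written transition table; the slot names, fragments and transitions are invented stand-ins for what EMI learns from a corpus, and a real ATN also carries conditions and registers that are omitted here.

# Toy sketch: generate a piece by walking a transition network over slot types,
# filling each slot with one of its alternative fragments (hand-written stand-ins).
import random

fragments = {
    "opening": [["C4", "E4", "G4"], ["C4", "G4", "E4"]],
    "answer":  [["A4", "F4", "D4"], ["F4", "A4", "C5"]],
    "cadence": [["G4", "B3", "C4"], ["D4", "B3", "C4"]],
}

transitions = {                 # which slot types may follow which
    "START":   ["opening"],
    "opening": ["answer"],
    "answer":  ["answer", "cadence"],
    "cadence": ["END"],
}

def generate():
    state, piece = "START", []
    while True:
        state = random.choice(transitions[state])
        if state == "END":
            return piece
        piece.extend(random.choice(fragments[state]))   # fill the slot with one alternative

print(generate())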

Page 14: Artificial Music

Composition – ATNs (4)

• Does it work?

– It produces convincing pieces in a composer's style

– It has been compared to a lesser human composer trying to mimic a master

– Cope says the output "usually lacks the true spark of genius"

– It requires human intervention in the analysis stage and in filtering compositions

Page 15: Artificial Music

Composition – Markov Chains (1)

• Markov Chains are good at representing short term musical patterns

• But have problems generating convincing complete pieces

• The Continuator, developed by Francois Pachet, exploits Markov chains' good points whilst avoiding their bad points

• Continuator is an interactive composition instrument

Page 16: Artificial Music

Composition – Markov Chains (2)

• The musician organises the piece's high-level structure while the Continuator "fills in the gaps"

• It is a bit like a much more advanced version of an arpeggiator or auto-accompaniment system

• It automatically learns and imitates musical styles, and the music it generates is stylistically consistent

• But it is also a new kind of “instrument” that can be played by a musician/composer and adapts quickly to changes in rhythm, harmony or style

Page 17: Artificial Music

Composition – Markov Chains (3)

• How does it work? (a minimal sketch follows below)

– The Continuator receives MIDI from the musician

– It segments the MIDI into phrases

– It analyses the phrases and builds up a Markov model

– At the same time, after a phrase is played in by the musician, the Continuator generates a continuation based upon the Markov model

– The generated continuation is output as MIDI to a synth
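
A minimal sketch of the learn-and-continue idea, assuming notes are bare MIDI pitch numbers and using only a first-order Markov model; the real Continuator uses a variable-order model and also handles rhythm, velocity and polyphony.

# Sketch of learning a Markov model from input phrases and generating a continuation.
import random
from collections import defaultdict

model = defaultdict(list)       # pitch -> pitches observed to follow it

def learn(phrase):
    """Update the Markov model from one phrase played in by the musician."""
    for a, b in zip(phrase, phrase[1:]):
        model[a].append(b)

def continuation(seed, length=8):
    """Generate a continuation starting from the last note of the input phrase."""
    note, out = seed, []
    for _ in range(length):
        followers = model.get(note)
        if not followers:       # unseen note: fall back to anything learned so far
            followers = [n for fs in model.values() for n in fs]
        note = random.choice(followers)
        out.append(note)
    return out

phrase = [60, 62, 64, 65, 67, 65, 64, 62]   # a phrase segmented from the incoming MIDI
learn(phrase)
print(continuation(phrase[-1]))             # would be sent out as MIDI to a synth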

Page 18: Artificial Music

Interactive Composition – Markov Chains (1)

• Instead of just using the learnt Markov probabilities to decide which of the alternative continuations to play, the system can also take into account the notes currently being played, so as to take account of harmony

• Prob(x) = S*MarkovProb(x) + (1-S)*(NoNotesInLast8 / 8)

• Varying S from 1 (automaton) to 0 (probability totally based on input) gives different output

• The user can vary S during a performance, along with switches to turn learning or continuation on and off (see the sketch below)
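
A small sketch of that blend is shown below; it assumes NoNotesInLast8 counts how many of the user's last eight notes match the candidate note x, which is one plausible reading of the formula rather than necessarily Pachet's exact definition, and the toy Markov distribution is invented.

# Sketch of Prob(x) = S*MarkovProb(x) + (1-S)*(NoNotesInLast8/8).
def blended_prob(x, markov_prob, last_eight_notes, s):
    """Blend the learnt Markov probability with the current input context."""
    harmonic_term = last_eight_notes.count(x) / 8.0   # assumed meaning of NoNotesInLast8
    return s * markov_prob(x) + (1 - s) * harmonic_term

def markov_prob(x):
    return {60: 0.5, 64: 0.3, 67: 0.2}.get(x, 0.0)    # toy learnt distribution

# S = 1 gives the pure Markov "automaton"; S = 0 is driven entirely by the input notes.
print(blended_prob(64, markov_prob, [60, 64, 64, 67, 60, 64, 62, 64], s=0.7))
print(blended_prob(64, markov_prob, [60, 64, 64, 67, 60, 64, 62, 64], s=0.0))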

Page 19: Artificial Music

Interactive Composition – Markov Chains (2)

• Does it work?

– It can produce a stream of notes where it is usually not possible to tell what was played by the user and what was played by the Continuator

– There is an "aha" effect when musicians hear it echoing back something they played earlier, or realise it is starting to play in their style

– Claimed to work with different styles

Page 20: Artificial Music

Composition – Cultural Approach (1)

• Music can be evolved using GAs with a human as the fitness function, but this is very time-consuming – the human can be replaced with a computer critic, but this hasn't been very successful so far

• The cultural approach uses GAs in which individuals socially interact through their music

• Note: Music is meaningful to their world but not necessarily ours

• Agents produce music which is evaluated by other agents

Page 21: Artificial Music

Composition – Cultural Approach (2)

• Todd and Werner coevolved male composers and female critics

• Each composer has a 32-note tune (drawn from 2 octaves)

• Each critic has her expectations encoded as a 1st-order Markov chain

• A "surprise" scoring method seems to work best

Page 22: Artificial Music

Composition – Cultural Approach (3)

• How does it work? (a sketch of the critic's choice step follows below)

– Composers are initialised with random tunes

– Critics are initialised with folk-tune melodies

– Each critic listens to a number of randomly selected composers and selects one to mate with, based on her Markov chain

– Each pair mates (and mutates) to produce one new child with a randomly chosen sex

– A third of the population is randomly killed off to return it to its previous size
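
The sketch below shows just the critic's selection step, scoring a tune by the critic's summed Markov expectations; the preferred "surprise" scoring, the genetic inheritance, the child's sex and the culling step are glossed over, and the critic's table is random here rather than initialised from folk tunes.

# Sketch of a critic choosing a composer to mate with, based on her Markov expectations.
import random

PITCHES = list(range(24))       # two octaves of semitones
TUNE_LEN = 32

def random_tune():
    return [random.choice(PITCHES) for _ in range(TUNE_LEN)]

def expectation_score(critic, tune):
    """Sum of the critic's expected transition probabilities over the whole tune."""
    return sum(critic.get((a, b), 0.0) for a, b in zip(tune, tune[1:]))

composers = [random_tune() for _ in range(20)]
critic = {(a, b): random.random() for a in PITCHES for b in PITCHES}

heard = random.sample(composers, 5)            # listen to some randomly selected composers
chosen = max(heard, key=lambda tune: expectation_score(critic, tune))
print("chosen composer scores", round(expectation_score(critic, chosen), 2))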

Page 23: Artificial Music

Composition – Cultural Approach (4)

• Eduardo Miranda has developed a mimetic model

• Each agent stores its sound repertoire and other parameters in memory

• Over time the society builds up a repertoire of common musical phrases

Page 24: Artificial Music

Composition – Cultural Approach (5)

• How does it work? (a sketch follows below)

– At each round agents pair up and …

– The first agent plays a randomly chosen tune from its repertoire (if its repertoire is empty it plays a random tune)

– The second agent replies with the most similar tune in its repertoire

– The first agent then compares the returned tune to its repertoire

– If the original tune is the most similar, then the second agent reinforces the existence of the tune it sent out and also tries to modify it to be more like the original

– Else the imitation fails
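
A sketch of one imitation round is given below; tunes are reduced to short pitch lists and similarity to a crude pairwise distance, which is much simpler than Miranda's agents (they imitate synthesised sounds), and "reinforcement" is approximated by adding a nudged copy of the reply to the second agent's repertoire.

# Sketch of one imitation round between two agents (each agent is a list of tunes).
import random

def distance(tune_a, tune_b):
    return sum(abs(a - b) for a, b in zip(tune_a, tune_b))

def most_similar(repertoire, tune):
    return min(repertoire, key=lambda t: distance(t, tune))

def imitation_round(agent_a, agent_b):
    # the first agent plays a randomly chosen tune (or a random tune if its repertoire is empty)
    tune = random.choice(agent_a) if agent_a else [random.randint(60, 72) for _ in range(4)]
    # the second agent replies with the most similar tune from its own repertoire
    reply = most_similar(agent_b, tune) if agent_b else [random.randint(60, 72) for _ in range(4)]
    # the first agent checks whether its original tune is the closest thing it knows to the reply
    if agent_a and most_similar(agent_a, reply) == tune:
        # success: the second agent reinforces its reply, nudged towards the original
        agent_b.append([r + (t - r) // 2 for r, t in zip(reply, tune)])
        return True
    return False                                  # otherwise the imitation fails

agent_a = [[60, 62, 64, 65], [67, 65, 64, 62]]
agent_b = [[60, 63, 64, 66], [72, 70, 69, 67]]
print(imitation_round(agent_a, agent_b))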

Page 25: Artificial Music

Computer critic (1)

• Hit Song Science is a piece of software by Polyphonic HMI of Barcelona that can determine whether a song is likely to be a hit record

• Software looks for underlying mathematical patterns in music

• It uses a hit database of 3.5 million songs from the last 50 years

• Songs with similar patterns in melody, harmony, chord progression, brilliance, noise, fullness of sound, beat, tempo, rhythm, octave, and pitch are close to each other in the “Music Universe”.
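
As a toy illustration of what "close in the Music Universe" could mean, the sketch below treats each song as a numeric feature vector and measures Euclidean distance to the nearest hit cluster; the feature values, cluster centres and metric are invented, since Polyphonic HMI's actual analysis is proprietary.

# Toy sketch: songs as feature vectors, closeness as Euclidean distance to hit clusters.
import math

hit_clusters = {
    # invented [tempo, brilliance, fullness, noise] cluster centres
    "cluster_1": [120.0, 0.7, 0.8, 0.2],
    "cluster_2": [95.0, 0.4, 0.6, 0.1],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

new_song = [118.0, 0.65, 0.75, 0.25]
nearest = min(hit_clusters, key=lambda name: distance(hit_clusters[name], new_song))
print("nearest hit cluster:", nearest, "distance:", round(distance(hit_clusters[nearest], new_song), 2))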

Page 26: Artificial Music

Computer critic (2)

• They say that if you look at songs from just last 5 years, they are clustered into a limited number of small groups spread across the “universe”.

• If you want a hit you need to position your song in one of the clusters.  

• What about somebody new and original, the next big thing? Doesn't using the above just make all music end up sounding the same? They say this is NOT true

• They say that a good HSS score is only one part of having a hit track; the other two parts are that a song must sound good to humans and must be well promoted

Page 27: Artificial Music

Track Recognition - Shazam

• Shazam uses proprietary pattern-recognition technology (patent-pending) that can identify recorded audio even under noisy conditions (within 30 seconds) and send the song and artist back as an SMS message

• The database contains over 1.7 million tracks, is growing by another 5,000 or so every week, and covers the UK and German markets

• It has taken over a million calls in less than 9 months