
Musical Expression: How Computers Turn Amateurs into Mozart

Leroy Ekechukwu

RHET 243: E2

05/04/2010

I. Introduction

It is an ever-present belief that music is one of the few accurate expressions of the soul.

The composer, using simple note progressions or complex, overlapping synchronizations, can

express any of the emotions humans can feel on any given day: happiness, love, anger, sadness,

despair. All these emotions can flow richly from a piano, or slam gloriously from a drum, or

twang brilliantly from a guitar. The composer has written a piece of himself into the music,

encoding his emotions and thoughts within the notes that reach the listener.

This is where the entire experience gets interesting. Though the composer has written a

song with an emotion—specific or abstract, it doesn’t matter—in mind, the listener may have a

completely different take on it. What might have been written in anger can express

determination or ambition to a listener; what might have been sadness might come across as a

deeper, more profound and abstract meditation on love. This is what makes music so powerful

and so timeless: the coding from the composer stays the same, but the interpretation can be

different. Therefore, the music becomes universal. It reaches the hearts and souls of more

individuals than it may have originally been encoded for, and is appreciated on a much grander

scale.

But it is hard to create music. That is, it is hard to create truly captivating music using

traditional means. To create music in the traditional sense, one must have adequate knowledge

of music theory, which is the study of the way sounds of different frequencies interact with

each other to make a pleasant or unpleasant sound. One must be proficient in at least one

instrument: the piano, the violin, the drums, the guitar, etc. But being proficient in one

instrument doesn’t cut it: the composer also has to have knowledge of how all the instruments

he is considering for his piece work together. If he does not, then he often must work with

other composers that might have the knowledge he doesn’t. These are just some general

requirements for writing good music in the traditional sense, and while there are more, a deficiency

in any of the aforementioned areas will make creating music a difficult and alienating

experience.

This was the problem that many faced in music for a long time. They heard symphonies

from great composers like Bach, Mozart, etc., and great

improvisation from jazz musicians like Duke Ellington and

Louis Armstrong, and wanted to create music like them. But

to create music like them, they had to have been immersed in

it their entire lives, which most of them had not been. On top of that,

recording good, clean, and permanent copies of these songs

was very difficult to do. Something had to be done to facilitate the process, and that is where

computers came in.

In this essay, I want to examine briefly the history of how computers came to be such

important tools in music production. With this history in mind, I want to look—again briefly, as

this field of research is ever-expanding and will not fit in such a short essay—at exactly how

computers facilitate the music production process.

[Image: Daft Punk, musicians that use computers to express themselves]

After analyzing that, I’ll discuss how computers have made people who otherwise would not have been musicians renowned locally

and even internationally. I’ll answer how computers have aided people as young as ten years

old in expressing themselves musically, and how they have facilitated the entire production process.

I’ll discuss briefly where computer music production is today and where it is headed in the

future, and as a bonus, I will share my experiences as a musician who uses the computer to

make his songs. Music is one of the most important aspects of human existence; computers

have helped this form of expression grow far faster than it did before their arrival.

II. A Brief History of Computers in Music

When the word “computer” is used, many often think of the most advanced technology

packed into the smallest of containers; that is, people think of computers as we know them

today. But the word “computer” was being used well before it became associated with the

technology available in gadgets like cell phones and handheld video

games. The earliest use of the word “computer” as it has come to be

known was at about the end of the nineteenth century. It did not

take long for composers to become interested in using computers for

musical purposes.

According to Aurelio de la Vega in “Regarding Electronic

Music,” “the interest of composers in producing music [by electronic

means] is as old as the invention of the vacuum tube” (3). The

vacuum tube was invented around the turn of the twentieth century, which is roughly the

same time the word “computer” was being used similarly to its contemporary sense.

[Image: The-Dream, a producer who uses computers to create his music]

Electronic

music, which for this essay’s purposes is synonymous with music created with computers, “generally refers to music which is

composed directly on magnetic tape by electronic means” (de la Vega, 3). The interest was

there as early as the late 1800s, when inventors like Thomas Edison were already beginning to

try to understand how to use electricity to implement human functions.

It wasn’t until about 1945 that the first technique of composing electronically appeared.

Chronologically speaking, composing music with computers happened as Aurelio de la Vega

describes it in his essay:

pure musique concrète (appearing in France from 1945 to 1952 in full force, and in later years incorporated as partial elements of full scale works where other sound producing means are used), which derives its sound materials from purely acoustical sources, such as pounded surfaces or railway station sounds…pure electronic music originating in the NWDR broadcasting studios in Cologne, Germany, around 1950…and rapidly dispersing, with all sorts of modifications, throughout Italy…(de la Vega, 4).

As early as the mid-1940s, the process of synthesizing was already being developed. Synthesis

is essentially the process of taking a signal, often a sine wave or something similar, and

manipulating its frequency, amplitude, phase, and so on to produce a new sound. It is fascinating

that composers were going above and beyond this with everyday sounds, which contain more

complex components than sine waves. This was a step closer, but a device that more closely

resembled the modern computer needed to be created in order for the major advancements

to occur.
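To make the idea of synthesis more concrete, here is a minimal sketch in Python (an illustration of my own; the helper name sine_tone is hypothetical and does not come from any of the programs or papers cited in this essay). It generates one second of a pure sine wave and then "synthesizes" a new sound simply by changing the wave's frequency and amplitude, which is exactly the kind of manipulation described above.

    import numpy as np

    SAMPLE_RATE = 44100  # samples per second (the CD-quality standard)

    def sine_tone(frequency, amplitude=1.0, duration=1.0, phase=0.0):
        """Generate a plain sine wave: the raw material of synthesis."""
        t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        return amplitude * np.sin(2 * np.pi * frequency * t + phase)

    # A 440 Hz tone (concert A), then a "new" sound made only by
    # doubling the frequency and halving the amplitude of the signal.
    original = sine_tone(440.0)
    manipulated = sine_tone(880.0, amplitude=0.5)

Real synthesizers layer many such operations (filters, envelopes, modulation), but the basic principle is the same.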

CSIRAC was the answer to that call. Dubbed the “Electronic Computer Project,”

“Trevor Pearcey and Maston Beard officially began [the project] in 1947” (Doornbusch, 11).

CSIRAC was one of the first computers ever created, and its specific purpose was

to process data and channel it to different destinations. Some of these destinations were “a memory

location, a register, the paper tape punch, or the loudspeaker” (Doornbusch, 12). The

loudspeaker is important in that mathematical sequences and programs would output their

results in the form of sound, which counts as an early form of synthesizer!

More computers like CSIRAC were built, with

processing speeds that something as small as an

iPod now dwarfs. It was hard to “accomplish a stable,

pitched sound” (Doornbusch, 13) because of how

slow the computers were and how much power they

needed to run. But, over time, as many know,

computers became more advanced and

required less power to run at higher speeds. Composers continued throughout the years to

produce sounds using these large computers, their work becoming slightly easier as the

computers got slightly more advanced. It was not until the late 1980s that computers were

advanced enough to be used reliably and widely in making music. One of the first forms the

music computer took was that of the modern synthesizer, around 1988.

III. Hardware and Software Synthesizers: Their Inner Workings

The evolution of computers and their use in musical composition gets more interesting

around 1988, because by then such computers had a definitive name: the synthesizer. These machines

enabled musicians to process signals and manipulate them into varying sounds.

[Image: CSIRAC]

This kind of process became the foundation for such genres of music as techno, pop, and, to a certain

extent, hip hop. Even percussive instruments like drums and xylophones could be effectively

simulated with synthesizers, thus making music composition accessible to anyone who could

afford it.

While synthesizers, or computer instruments, make it easier for the layman to create

music, the way they actually work is another story. The challenges associated with computers

being able to effectively simulate real music are articulated in the following:

Computer cognition of music actually involves four unique problems. First, how will music be measured to provide input information to the computer system? Second, how will that information be presented to the computer? Third, how will it be represented in the computer program in such a way that the program can, in some way, come to some understanding of its meaning? And finally, what will the computer do with this knowledge (Dobrian, 2)?

The answer in its entirety is well beyond the scope of this essay; in fact, it is an ever-

expanding answer, one that continues to reshape the way musicians use computers to

express themselves. As technology continues to evolve, the methods of musical creation

continue to become easier and also more flexible.

To put it briefly, computer scientists in

conjunction with musicians create complex

programs that simulate the traditional means

of musical production. The way to understand

it best is like this: “By complex calculations

performable only by computers, one can give

the illusion of recorded sound flying about through space” (Dobrian, 6). This is

implemented by programmers with the use of very complex algorithms that manipulate

the sinusoidal components of notes to create familiar or even unfamiliar sounds.

[Image: Moog hardware synthesizer]

Once

this function is fully implemented, it is only a question of creating an interface that will

look familiar to a musician. So, instead of looking at the complex calculations the

computer is making, the musician can instead see things like a virtual piano, guitar, or

whatever the mind is capable of dreaming up. Such enormous advances in technology

have made it so that, today, even an inexperienced layman can create something

musical.
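As a rough sketch of how such an interface might connect to the underlying mathematics (my own illustration in Python; the function names are hypothetical and not taken from any program mentioned here), a virtual piano only needs to translate each on-screen key into a frequency and then sum a few sinusoidal components to imitate an instrument's tone:

    import numpy as np

    SAMPLE_RATE = 44100

    def midi_to_frequency(midi_note):
        """Equal-temperament tuning: MIDI note 69 is A4 at 440 Hz."""
        return 440.0 * 2 ** ((midi_note - 69) / 12)

    def play_key(midi_note, duration=0.5, harmonics=(1.0, 0.5, 0.25)):
        """A toy 'virtual piano' voice built from a few sinusoidal components."""
        t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        f0 = midi_to_frequency(midi_note)
        tone = sum(a * np.sin(2 * np.pi * f0 * (i + 1) * t)
                   for i, a in enumerate(harmonics))
        return tone / sum(harmonics)  # keep the output level reasonable

    # Pressing middle C (MIDI note 60) on the on-screen keyboard:
    middle_c = play_key(60)

The musician never sees any of this arithmetic; he sees only the keyboard.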

IV. Computer Musicians Today

Because of the ease of creating music using computers today, even average people can

sound like a modern-day Chopin. There is a common misconception, however, that computers

have removed all of the barriers that prevent anyone

from creating music. As Chris Dobrian puts it, “While it is

unlikely that computers will help people become virtuosi

without practicing (although the possibility may one day

warrant consideration), many admirable attempts have

been made to reduce the tedium…” (6). The fact of the

matter is that music is quite hard to create without at least

having some basic knowledge of how to do it. But, as noted above, there have been attempts to

lower the requirements for success in music production. Today, musicians who are

exceptionally skilled can create what are called loops. Loops, in essence, are premade musical

progressions that a layperson can then manipulate for his own use, thereby creating something that

is original.

[Image: FL Studio, a digital workstation complete with synthesizers, piano rolls, and instrument sounds]

Tools like loops remove the need to know much about music because

now one doesn’t have to know what buttons to push or what notes to string together to make a

great-sounding song. One only needs to string enough loops together in a way that sounds

logical, and a song has been created almost from scratch.
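To illustrate what “stringing loops together” amounts to inside such a program (a simplified sketch of my own in Python; real workstations are far more involved, and the names here are hypothetical), each loop can be thought of as a short block of audio samples, and a song is just those blocks laid end to end in whatever order sounds logical:

    import numpy as np

    SAMPLE_RATE = 44100

    def build_song(loops, arrangement):
        """Concatenate premade loops in the order the producer chooses."""
        return np.concatenate([loops[name] for name in arrangement])

    # Two premade one-bar loops (silent arrays stand in for real audio here).
    loops = {
        "drum_loop": np.zeros(SAMPLE_RATE * 2),
        "piano_loop": np.zeros(SAMPLE_RATE * 2),
    }

    # An intro, a repeated figure, and an outro; no music theory required.
    song = build_song(loops, ["piano_loop", "drum_loop", "drum_loop", "piano_loop"])

Real programs also let loops overlap and repeat on a timeline, but the principle of assembling prebuilt blocks is the same.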

But the tradition for many young producers is to actually learn a thing or two about

musical production within the program that they wish to use. Sebastian Hill (or Dr. S, where the S is

the treble clef symbol), a producer with whom I frequently collaborate, put it this way: “It’s very

easy to use premade loops when making a beat, but it’s better to actually know what notes

you’re hitting and what instruments you’re using.” Today’s musical production programs have

features that make this very simple. One of the most important tools is the Piano Roll, which

“send[s] note and automation data to plugin

instruments associated with the Piano Roll’s

channel” (FL Studio Reference Manual). What that

means essentially is that a musician can pick any

instrument they want that comes with the program

they are using, hit a note, and the instrument will

play that note. Then they can “write” a melody

almost the same way a traditional composer might write it out on paper. The beauty of it is that

the producer does not have to be able to read the notes: the computer does it all for him! It will

read the notes the producer has recorded and play them, all without the producer having to go

back and read what he’s written in order to play.

[Image: Ableton Live, another example of a digital workstation]
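A piano roll, at bottom, is just a list of note events that the program plays back on the producer's behalf. The sketch below (my own simplified Python illustration, not FL Studio's actual internals) stores a short melody as (MIDI note, start beat, length in beats) entries and renders each one as a tone at the right time:

    import numpy as np

    SAMPLE_RATE = 44100
    BPM = 120                  # tempo: two beats per second
    SEC_PER_BEAT = 60 / BPM

    # Piano-roll entries: (MIDI note number, start beat, length in beats).
    pattern = [(60, 0, 1), (64, 1, 1), (67, 2, 1), (72, 3, 2)]  # a C major arpeggio

    def render(pattern):
        """Play the pattern back; the program reads the notes so the producer doesn't have to."""
        total_beats = max(start + length for _, start, length in pattern)
        out = np.zeros(int(total_beats * SEC_PER_BEAT * SAMPLE_RATE))
        for note, start, length in pattern:
            freq = 440.0 * 2 ** ((note - 69) / 12)   # equal temperament
            n = int(length * SEC_PER_BEAT * SAMPLE_RATE)
            t = np.arange(n) / SAMPLE_RATE
            begin = int(start * SEC_PER_BEAT * SAMPLE_RATE)
            out[begin:begin + n] += np.sin(2 * np.pi * freq * t)
        return out

    melody = render(pattern)

The producer clicks notes onto a grid; the rendering step is what the program handles for him.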

Features like the Piano Roll have aided countless contemporary musicians in their

expression. Talents like Kanye West, The-Dream, Daft Punk, Tiesto, and many more have used

computers to make their music. Daft Punk and Tiesto are also examples of people who not only

use the computer to make their music, but also use it to perform their music live. Technically

speaking, all one really needs is a computer to be internationally recognized as a great

musician. This might sound like a bold statement to make, but technology has advanced so

rapidly that what was once a complex art has been made much, much simpler.

The computer is also a great tool for up and coming composers and musicians. I myself

use the computer to create music, record lyrics, and DJ. I do all of this with the aid of such

musical programs as FL Studio, Reason, Sony ACID, Sony Sound Forge, Ableton Live—all used

for musical production—and Serato ITCH—used for DJing live. They all have very different

interfaces (some of which are pictured throughout this essay), but they all boil down to the

same process of creative expression: I pick the sounds I

want to use, create melodies and harmonies for each

sound, and build them on top of each other to create

what is perceived as music. With Serato ITCH, I pick

today’s biggest hits—as well as some obscure songs—and

mix them seamlessly together, expressing myself to the

audience and taking them on an emotional journey only

music can create. While I have the ability to create music the traditional way (having learned all

the necessary things by playing the drums and the piano), I choose to use the computer

because of the countless possibilities it offers me. Now, through the computer, I can play the

drums, the piano, the trumpet, the violin, the cello, the organ, and I can even manipulate them

to sound like whatever I can imagine. I am not alone in this: hundreds, if not thousands of

people use the computer to create music the same way I do.

[Image: Serato ITCH, the latest program to enable people to DJ with or without turntables]

V. Conclusion: The Future of Computer Music

The future might be uncertain in many other areas of life, but not with the computer’s

relevance to music. Turn on the radio today to any station playing hits and hip hop, and you will

hear music that could not have been created with anything else but a computer. Listen to the

instruments and how strange, how futuristic they sound. Listen to the vocalist and how their

voice sounds larger than life, robotic, or doubled (a technique known as double-tracking). All of these

things are done with the aid of a computer, and the

trend is certain to continue into the future.

I say this because of how quickly (in the grand

scheme of things) the computer became a staple in

every production studio and every home. Programs

designed to make even a ten-year-old boy sound like

Mozart sprang up almost overnight, if one really thinks about it. And the public

loves it. They love the processed sounds and the futuristic qualities music is taking on these

days, which only means they will want more. And, as younger generations learn how easy it was

for their predecessors to become famous with only a hard drive, screen, keyboard, and mouse,

they will want to follow in their footsteps. As Anthony Swift (alias Swift Beatz), another

producer I know, put it, “It can only get easier from here. With computers, whatever we

imagine becomes reality in a split second.” He’s quite right.

[Image: Tiesto]

Musicians around the world will

need only think of an idea, and a computer will implement it. Musical expression will expand at

an incredible rate as the years go by and computers become more powerful, and that is a

future to which I look forward.

Works Cited

Aikin, Jim. Software Synthesizers: The Definitive Guide to Virtual Musical Instruments. San Francisco: Backbeat, 2003. Print.

De La Vega, Aurelio. “Regarding Electronic Music.” Tempo 1.75 (1965): 2-11. JSTOR. Web. 08 Apr. 2010. <http://www.jstor.org/stable/943392>.

Dobrian, Chris. Music and Artificial Intelligence. Rep. Web. 15 Apr. 2010. <http://music.arts.uci.edu/dobrian/CD.music.ai.htm>.

Dobrian, Chris. Music Programming. Web. 15 Apr. 2010. <http://music.arts.uci.edu/dobrian/CD.MusicProgramming.htm>.

Dodge, Charles, and Thomas A. Jerse. Computer Music: Synthesis, Composition, and Performance. New York: Schirmer, 1997. Print.

Doornbusch, Paul. “Computer Sound Synthesis in 1951: The Music of CSIRAC.” Computer Music Journal 28.1 (2004): 10-25. Print.

"FL Studio 9 Reference Manual." FL Studio Homepage. Web. 28 Apr. 2010.<http://flstudio.image-line.com/help/Index_Frame_Left.htm>.

Hill, Sebastian, and Anthony Swift. "Producing with Computers: An Interview with Sebastian Hill and Anthony Swift." E-mail interview. 25 Apr. 2010.

Miranda, Eduardo Reck. Readings in Music and Artificial Intelligence. Amsterdam: Harwood Academic, 1999. Print.

Theodore, Michael. “Propellerheads ReCycle!, ReBirth, and Reason Audio Software.” Computer Music Journal 27.2 (2003): 121-26. JSTOR. Web. 08 Apr. 2010. <http://www.jstor.org/stable/3681622>.

Wessel, David, and Matthew Wright. “Problems and Prospects for Intimate Musical Control of Computers.” Computer Music Journal 26.3 (2002): 11-22. Print.