Introduction to Digital Sound Design: Week Two Study Notes
TRANSCRIPT
7/30/2019 Introduction to digital sound design: Week Two Study Notes
http://slidepdf.com/reader/full/introduction-to-digital-sound-design-week-two-study-notes 1/23
Week 2
We've talked about how sound works: sound pressure waves move air molecules back and forth
through cycles of compression and rarefaction until they reach our eardrums. The eardrums
capture that energy and pass it into the cochlea within the ear, where fluid vibrates the basilar
membrane. The membrane resonates with the various ratios of vibration and sends electrical
signals to our brain, and we hear sound.
So let's talk about how we capture that same physical energy with an electronic device,
converting the acoustic energy into an electrical analogue. This electrical energy can then be
sent through cables into a mixer and, eventually, converted into a digital representation,
which is how the computer records that same information.
Microphones
The way that we convert physical energy into electrical energy is
through some kind of transduction. Most of the transducers that
we are familiar with are called microphones.
How do microphones work?
There are many different microphone designs, but I want to focus primarily on two today.
Dynamic Microphone
The dynamic microphone is the most common and durable type
of microphone that you can find.
It works under the following principle:
A dynamic microphone has a membrane covering the top of the microphone; usually this is
polyvinyl chloride or some other kind of thin material. This membrane, or diaphragm, moves
according to the air pressure: the sound pressure waves that hit it start to move the
diaphragm. Connected to the diaphragm is a copper coil, surrounded by a magnet with a
positive and negative pole on each side.
As the diaphragm begins to move, the coil of copper moves as well, and as the coil moves
within the magnetic field, it induces an alternating positive and negative electrical charge that
is sent out through the cable.
What we've got is a conversion of acoustic energy into an electrical signal: patterns of
alternating current.
In electricity there are two types of current: direct and alternating. Sound cannot be represented in
direct current. We have to have something that alternates, to serve as an analogue for the
alternations of compression and rarefaction in the air.
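To make the idea of an "alternating analogue" concrete, here is a minimal sketch (in Python, with illustrative numbers, not from the course) of a sampled sine wave: the digital stand-in for the alternating signal a microphone produces, swinging positive for compression and negative for rarefaction.

```python
import math

def sample_sine(freq_hz, sample_rate, n_samples, amplitude=1.0):
    """Sample a sine wave: a digital stand-in for the alternating
    current a microphone produces from compression/rarefaction."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

# 100 samples of a 440 Hz tone at a 44.1 kHz sample rate:
# roughly one full cycle of the waveform.
wave = sample_sine(440, 44100, 100)

# The signal alternates: positive (compression) and negative (rarefaction).
print(max(wave) > 0 and min(wave) < 0)  # True
```

The key point is that the values cross zero and change sign, unlike direct current, which would sit at a constant level and carry no sound information.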
This microphone principle is also the principle used in speakers, just inverted: you have a
moving cone attached to a coil around a magnet, and as the electrical energy moves the coil,
it moves the diaphragm, which moves the air molecules again, and we hear the sound.
Condenser Microphone
The condenser microphone doesn’t have a magnet to
give you the polarized electrical patterns. Instead there are
two plates at the top of the microphone, one charged
positive and the other negative (the plates are often made
of platinum or gold); the electrical relationship between
them is called capacitance. As the air pressure pushes
down on the top plate, it moves closer to the bottom plate,
which changes the capacitance. That changing distance
over time is captured and sent out through the cable, and
we get positive and negative electrical energy, alternating
current, once again.
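To see why moving the top plate changes the signal, here is a small sketch of the parallel-plate capacitance formula C = ε₀·A/d. The plate area and gap below are illustrative guesses, not real microphone datasheet values.

```python
# Parallel-plate capacitance: C = epsilon_0 * A / d.
EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area_m2, gap_m):
    """Capacitance of two parallel plates of given area and gap."""
    return EPSILON_0 * area_m2 / gap_m

# Illustrative numbers: a 1 cm^2 diaphragm 25 micrometres from the backplate.
rest = capacitance(1e-4, 25e-6)

# Sound pressure pushes the diaphragm 1 micrometre closer:
# smaller gap, larger capacitance.
pushed = capacitance(1e-4, 24e-6)
print(pushed > rest)  # True
```

The alternating pressure wave therefore produces an alternating capacitance, which the charged plates turn into an alternating electrical signal.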
Although this microphone is more delicate and fragile, it records a much higher quality sound.
The trick with this mic is pre-charging the plates. It can be done in various ways:
One method is to have a battery on board: either a nine-volt battery or another kind of NiCd
battery.
Another method is called “Phantom Power”
Phantom power was first developed by the German microphone company Neumann, back in
1966. They were working with condenser technology and, along with the Norwegian
Broadcasting Company, came up with a standard: 48-volt phantom power.
Phantom power charges the two plates when you plug the microphone cable into a phantom
power source, so that the charge is sent through the cable to the mic. This can be an external
power source, but most mixers come with a phantom power source built in, which can be
used when you are connecting a condenser microphone to your mixer.
Variables of microphones:
There are important things to consider when deciding on which microphone to use.
The technology of dynamic versus condenser.
Dynamic microphones you'll find more in sound reinforcement, and condensers you'll find more in
studio recording applications, but you can find both in either.
Polar Pattern
The polar pattern is the microphone’s sensitivity to the direction of the sound coming into the
microphone. There are a few different types of polar patterns:
Omnidirectional microphone – sound is received from all 360 degrees at equal
sensitivity.
Boundary microphone – an omnidirectional microphone that can be placed, literally,
at a boundary, and that also has very good reach capabilities.
Cardioid pattern – a cardioid picks up sound in front of the microphone and some of the
energy coming in from the sides.
Hyper-cardioid or super-cardioid – these narrow the pickup field further in front of the
microphone. Usually the hyper- and super-cardioids also allow some energy directly
behind the microphone to come in.
Specialised patterns:
Figure 8 – a figure eight is a sort of double cardioid: it allows sound energy to come in
at the front of the microphone and equal energy to come in from behind the
microphone. It is a very powerful pattern used in recording.
Shotgun microphone – often used in video or film applications, this is basically a very
narrow cardioid mic that only picks up sound from a very narrow angle.
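The patterns above can all be described by one textbook formula for first-order microphones, s(θ) = a + (1 − a)·cos θ, where the parameter a selects the pattern. The sketch below uses common reference values for a; it is an illustration of the geometry, not something taken from the course.

```python
import math

def sensitivity(theta_deg, a):
    """First-order polar pattern: a + (1 - a) * cos(theta).
    a = 1 omni, a = 0.5 cardioid, a = 0.25 hypercardioid, a = 0 figure-8."""
    return a + (1 - a) * math.cos(math.radians(theta_deg))

# Cardioid: full pickup directly in front, none directly behind.
front = sensitivity(0, 0.5)     # 1.0
rear = sensitivity(180, 0.5)    # 0.0

# Hypercardioid: narrower front lobe, but a small rear lobe appears.
hyper_rear = abs(sensitivity(180, 0.25))  # 0.5

print(front, rear, hyper_rear)
```

An omnidirectional mic (a = 1) gives 1.0 at every angle, and a figure-8 (a = 0) gives equal but opposite-phase pickup at front and rear, matching the descriptions above.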
Depending on what you're recording, or your sound application, you would choose one of those
polar patterns. Some of the more expensive microphones can switch between patterns, giving
you a more versatile microphone.
Microphones part 2
Another aspect of microphones that's important for us to understand is the frequency response of
microphones. We talked about the two important designs, the dynamic and the condenser:
one used primarily for sound reinforcement (the dynamic), and the other, the condenser,
found primarily in studio recording. The reason for that split is not only the design but also the
amount of money you pay to get a good frequency response.
Remember we talked early on about how the human ear can hear frequencies from about twenty
hertz, twenty vibrations per second, up to around eighteen to twenty thousand hertz. So a
microphone needs to pick up energy across that same frequency range.
A really good microphone, usually costing a little more money, will give you a frequency response
of at least 20 Hz up to 20,000 Hz. As you come down in price range and design, you will
find microphones with a narrower range than that.
If your microphone can't represent the full range of the sound energy, you won’t hear any dramatic
difference in the kinds of sounds it records, other than a slight timbral change. Timbre is the result
of all the frequencies vibrating above the fundamental; all of those, up to
20,000 hertz, affect the colour of the sound that we hear.
Some expensive mics may also give you the option of changing the sensitivity of the range at which
they record. If you're recording in a live setting, halls will often have air-conditioning sound
coming through the vents. Usually those are around 75 up to 150 hertz: very low
frequencies. A microphone may allow you to reduce the sensitivity in that range so you don't pick up
the sound of the air conditioning.
Cable Systems and Impedance
Impedance is the amount of resistance the electrical energy faces when leaving the microphone.
Balanced cable systems
A balanced cable system can be identified by its 3-pin connector, attached to what is
commonly called a microphone cable, or XLR cable. (This kind of microphone cable is going to be
used in most of your applications for recording and sound reinforcement.)
Each microphone applies a resistance to the electrical energy leaving it; this resistance
is usually 600 ohms. If the symbol “lo z” or “lo-z” is on the mic, it means that it is a low-impedance
microphone and has a smaller resistance.
The good thing about low-impedance microphones is that the signal can travel long distances,
because there's very little resistance. The downside is that, if you are travelling a long distance with
very little resistance in the cabling, you can easily pick up other radio frequencies that are
in the space or in the air.
So, along with the positive and negative pins on the mic, you have a third pin that acts as a
shield to keep out other signals.
Unbalanced cabling systems
An unbalanced cabling system consists of a 2-pin connector on a standard microphone
cable. This type of microphone is called a high-impedance microphone. These are used only
in recording studios; you would never run one in a sound reinforcement situation.
High impedance generally runs about 50,000 ohms, or 50 kilo-ohms, building much more
resistance into the signal. This makes it well protected against any kind of outside
interference, but you can only run it over a short distance.
Mixer Basics
Now that we understand how sound energy is converted into electrical energy through
microphones, via various kinds of transduction systems, the signal then moves through a cable
with various kinds of connectors, and all those microphones plug into something called a mixer.
The mixer is where we adjust the levels, change the amplification of certain frequency ranges,
and basically add effects to the sound. The mixer is where everything happens in the signal chain.
Components of a mixer:
1) Input channels
There are multiple channels of information, coming from microphones, guitars, synthesizers,
etc., all plugged into the top row of inputs. You can see here that we have the XLR connectors
for the microphones. Then we see a place for the quarter-inch phone plugs, and then
the RCA plugs for a CD player, an iPod, or anything else we might plug in here.
2) Stereo device input channels
In channels five-six, seven-eight, nine-ten, and eleven-twelve, you will see we only have one
output volume. This is because these channels are designed so that we can plug in any
device that already has a stereo output: a stereo system, a CD player, an iPod, a synthesizer,
anything that generates stereo information.
3) Output channels
Usually these are on the back of a mixer; this one has a two-channel output. Everything
that plugs into the top of the mixer gets mixed into a stereo, left and right, channel output.
Here again you see another XLR output, so we can send this to our recording device if we
want to record to the computer, or send it to speakers for sound reinforcement.
So the purpose of the output channels is to take the final mixed signal to the output.
4) Power Switch and Power Input
Also at the back of the mixer.
5) Phantom Power Switch
Most mixers today will have a phantom power source on them. If you have a condenser
microphone plugged into one of these four microphone inputs, you will need to turn on the
phantom power for those microphones, so that the condenser can be charged up. On this
mixer, phantom power only goes through the first four input channels.
6) Pre-amplifier
The electrical energy leaving the mic is usually very low, so the first thing you have to do is
run it through a preamplifier. The preamplifier raises the signal high enough that we can
actually manipulate it through the mixer.
The very first knob at the top determines how much pre-amplification we apply to the
incoming signal. You want to get the correct balance between pre-amplification and the
overall amplitude. If the preamplifier is set too high, the preamp will overdrive and the sound
will distort. If it is too low, not enough energy is sent through, and you will end up turning the
output level up high, which will also distort the sound.
7) Equaliser(s)
These blue knobs on the mixer above control what is called equalization. Equalization allows
us to divide the frequency spectrum (20 – 20,000 hertz), and an equalizer allows us to
amplify or attenuate various ranges of that spectrum. The simplest example is a device
that has tone controls: a bass knob and a treble knob. These let you take a very wide
band of the frequency spectrum and manipulate the upper and lower partials, changing the
timbre of the sound.
Above, we have a three-band equalizer: a low-frequency knob centred at about 80 hertz,
above that a mid-range knob at 2,500 hertz, and then an upper knob at about 12,000 hertz.
Some mixers give you the ability to shift the centre frequency of each equalizer band, making
the frequency range it affects higher or lower.
Types of equalisers:
Parametric equalizer
Graphic equalizer
Equalisers are critical when doing sound reinforcement, because in sound reinforcement
you're going to have to deal with what's called feedback. Feedback is the returning or
bouncing of sound back into the system. This will be different in each situation you work in,
because every room has a different acoustic mapping.
So, when a sound engineer comes in to set up a sound system, they have to check the
room to see where the frequencies that get over-amplified in that particular room are.
Every room is different, so they will analyse it: they take a spectrum analyser and run what is
called pink noise. A pink noise generator sends all frequencies into the sound system, and
then they measure the output to see whether there are places where certain frequencies are
getting amplified because of the acoustics of the room. The sound engineer measures those
points, and then on the parametric equalizer specifies those frequency points and reduces
them, so you end up with a flat response.
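The flattening step can be sketched numerically. Below, a hypothetical room response (the gain the room's acoustics apply at each band, invented numbers) is measured with equal input at every band, the role pink noise plays, and each over-amplified band is cut by the inverse amount on the equalizer.

```python
# Hypothetical room response: the gain the room applies at each
# frequency band (Hz -> gain). These numbers are illustrative only.
room_gain = {63: 1.0, 125: 1.0, 250: 1.8, 500: 1.0, 1000: 1.0, 2000: 1.3}

# Feed equal energy at every band (the role of pink noise), measure the
# output, and set a parametric EQ cut that inverts each band's gain.
eq_cut = {f: 1.0 / g for f, g in room_gain.items()}

# Room response times EQ cut: the combined system is flat.
flattened = {f: room_gain[f] * eq_cut[f] for f in room_gain}
print(all(abs(g - 1.0) < 1e-9 for g in flattened.values()))  # True
```

In practice the engineer only cuts the problem bands (here 250 Hz and 2 kHz) rather than touching every band, but the principle is the same: measure, invert, flatten.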
8) Pan
Pan is the next knob under the equalizer, and it is very important in a stereo setup. If a
microphone signal is coming in and I want to send it to the left channel, I take my pan knob
and move it to the left, and the signal only comes out of the left channel. If I move it to the
right, it only comes out of the right.
So, why would you do this in sound reinforcement? For example: let's say you've got
musicians on stage, with speakers on either side of the stage, and the guitarist is a little to the
right side of the stage. When you create a sonic image, you would probably move that
channel's panning slightly to the right, so that when the audience hears the guitar player,
the sound is actually coming more from the right speaker, making it sound as if it is coming
directly from the musician.
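Pan knobs on real mixers commonly implement the constant-power pan law, so the overall loudness stays steady as a sound moves across the stereo image. A minimal sketch (standard textbook formula, not this mixer's documented circuit):

```python
import math

def constant_power_pan(position):
    """position in [-1, 1]: -1 hard left, 0 centre, +1 hard right.
    Returns (left_gain, right_gain) using the constant-power pan law."""
    angle = (position + 1) * math.pi / 4   # maps [-1, 1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Guitarist slightly right of centre, as in the stage example above.
l, r = constant_power_pan(0.3)
print(r > l)  # True: more signal goes to the right speaker

# Total acoustic power stays constant at any pan position.
print(abs(l * l + r * r - 1.0) < 1e-9)  # True
```

At centre (position 0) both gains are equal, and at the extremes one channel is silent, matching how the pan knob behaves on the mixer.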
Mixer Basics part 2
9) Auxiliary sends (effects sends)
On our mixer, there are two extra knobs above the equalizer: these are the auxiliary controls.
They send the sound to the auxiliary output control area, which is divided into the stereo
auxiliary returns and auxiliary sends one and two.
What you can do is this: we take an instrument plugged into channel one and send its signal
to auxiliary send one or two. I control the gain at which it gets mixed into auxiliary send
channel one in the auxiliary control area. From there, I can send that somewhere else: a
separate equalizer, a monitor speaker, another mixer, a flanger, reverb, delay, compressor,
expander, etc.
Then, coming off that hardware, you bring its output back into the auxiliary returns, and it
gets mixed in with the output signal.
For sound reinforcement purposes, the auxiliary sends have multiple uses: you may go out to
a reverb unit or some other kind of effect. But primarily, your auxiliary sends are going to be
feeding monitors on stage.
The normal sound reinforcement configuration
You have your stage speakers facing the audience. Usually they're at the sides of the performers,
not behind the performers, because behind them you get potential feedback. The musicians also
need to be able to hear each other on stage, so often you will see another set of speakers, a
whole separate configuration pointing back toward the musicians, so that they can hear each
other on stage. They don't need to hear what the audience is hearing, but they need to hear
specifically what they need in order to play together, and that's called the monitor system.
Today, more often than not, you'll find wireless monitors plugged into a little earphone in the
performer's ear, so you don't need big banks of speakers down on the floor. But it's the same
principle: auxiliary send one might route the keyboard to the vocalist's monitor, so that they
hear the keyboard more strongly while the bass and all the other instruments are reduced.
So that's why our auxiliary sends are critical for allowing a group to play together.
I mentioned that in sound reinforcement there are often three different systems in play.
When a sound engineer is mixing sound reinforcement, you're handling the stage speakers,
balancing them and making sure you don't get feedback, and trying to get the right mix
throughout the whole hall. Then you're trying to get the multiple monitors all at the right mix
levels going back to the musicians so they can play together. And many times you'll have a
third level of amplification: your guitar amplifiers.
Often you will see a guitarist who prefers their particular Fender amp, or whatever design
they want, and the sound engineer has to put a microphone in front of that amplifier. They
can't just plug the guitar directly into the mixer, because the player wants that particular
quality of the amplifier in the sound of the guitar. So you have to capture the sound of that
Fender guitar amplifier with a microphone and mix it correctly in with everything else. In some
ways sound reinforcement is very sophisticated, and multiple problems can emerge; a good
sound engineer understands all three systems and how they have to operate together, and
the mixing console is where all of this happens.
10) Potentiometer
Down at the bottom of each channel, you'll always have the same kind of control: a
potentiometer. Larger mixers will have a slide fader that does the same thing. It amplifies or
attenuates the electrical signal; this is what we call the volume control, or the final gain.
Remember that the pre-amp gain does a similar job at the input: it controls how much signal
gets pre-amplified, while the potentiometer determines the final mix, how much of the signal
goes to the outputs on the back.
In professional recording studios, you will usually have a specialist come in, called the
tonmeister: someone who specializes in knowing how to record sound, apply exactly the right
effects processing, and do exactly the right mixing to make it sound really great for
commercial purposes. So when you hear an artist on a downloaded sound file, whether it's
Coldplay or Lady Gaga, they usually have one sound artist, a tonmeister, who comes in to do
the final mix. The exact mixing levels, how much sound goes to a gate, how much goes to a
reverb or delay, all of that is a very subtle art form, and a tonmeister specializes in creating
the particular quality of sound, a world of sound unique to each artist.
11) Master output configuration
This is where we finalise all the information and send it to the next destination. Here we can
see the master levels for auxiliary one and two: a master control for how much gain goes out
to each auxiliary overall.
There's also a control room output. In a recording studio, you may want to send your master
outputs into a computer to record the sound, but you may need to hear the sound in the
studio at the same time. The control room out lets you mix the signal to the control room,
with its own volume control, while the master output volume controls how much goes to the
master outputs feeding your recording device. So there are multiple ways of sending the
information wherever it needs to go, for people to be able to hear it or monitor it, and to the
recording devices separately.
This is a 12-channel input, 2-channel output mixer. When you look at mixers, you will often
see different configurations. This is a very simple mixer, but often you will find 24, 48, or 72
channels of input: big, long mixers with lots of fader inputs, all having basically the same
functions. Most of the more expensive mixers will also have a different kind of output: 2, 4, 8,
16, or 24 channels. If you want sixteen channels of output, that means you can assign
everything to 16 different outputs that can go into sixteen different channels in your computer
program, or to sixteen different speakers, and so on. On the larger mixers, you'll find different
combinations.
For example, if you see a mixer described as 24 by 8 by 2, that means it has 24 inputs, which
you can send to either eight channel outputs or two channel outputs: you can mix into eight
channels separately, or combine everything into a two-channel stereo output.
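At its core, a mixer is just summing: each input channel, scaled by its gain and split by its pan, is added into the output buses. Here is a minimal sketch (a simple linear pan, not the constant-power law professional consoles use) of mixing mono channels down to a stereo bus.

```python
def mix(inputs, gains, pans):
    """Sum mono input channels into a stereo (left, right) output bus.
    inputs: list of per-channel sample lists; gains/pans: one per channel,
    pan 0.0 = hard left, 1.0 = hard right (simple linear pan)."""
    n = len(inputs[0])
    left = [0.0] * n
    right = [0.0] * n
    for ch, g, p in zip(inputs, gains, pans):
        for i, s in enumerate(ch):
            left[i] += g * s * (1.0 - p)
            right[i] += g * s * p
    return left, right

# Two channels, three samples each: channel 0 panned hard left,
# channel 1 panned hard right.
l, r = mix([[1.0, 1.0, 1.0], [0.5, 0.5, 0.5]],
           gains=[1.0, 1.0], pans=[0.0, 1.0])
print(l, r)  # [1.0, 1.0, 1.0] [0.5, 0.5, 0.5]
```

A 24-by-8-by-2 console does the same thing with 24 inputs and a routing matrix into eight buses plus a stereo pair, but the arithmetic per sample is identical.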
Audio File Formats
Let's talk a little bit about the different ways we can record audio and the different formats
you'll find on computers. It can be confusing to people which format to use, how much
memory it takes up, and what quality you get.
Three Major Groups of Audio Files
1) Uncompressed audio formats, such as WAV, AIFF, AU, or Raw Header-less PCM
This is the main type of file used in recording studios: an uncompressed file format. When you
record, your audio sampling rate is going to be 44,100 samples per second, or 44.1 kHz
(professional studios usually go even higher, to 96 kHz and up), so you're getting a lot of data
representing the sound file in the computer, and at a professional level you don't want that
compressed. You want to keep as much information about the sound as you possibly can, so
that as you work with it, edit it, and mix it, you're not diminishing any of the potential sound.
The uncompressed audio formats you will find are referred to as WAV, AIFF, AU, or
sometimes raw header-less PCM files. These are the largest files, because of the amount of
information in them.
2) Lossless compression
This is a type of compression which stores the data in less space without permanently
removing any of it; the audio data is just packed to save space, giving roughly a 50% data
reduction. In some ways this is the most preferable of the compression formats, because
you're not actually losing any of the original data, you're just reorganizing it into a smaller
space, but you don't get that much saving of memory from it.
3) Lossy compression
These are compression algorithms which take the data and try to figure out which of the information
that has been sampled and stored is not necessarily information that's going to be heard, and it
takes out the data that is not really going to be affecting the sound to any great degree. There is a
(slightly audible) data reduction of 80 – 90%.You probably will find on your mobile devices and most
of the MP3's etc. is found in this lossy compression style.
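The size differences between the three groups follow directly from the numbers above. A small sketch of the arithmetic, using the reduction percentages quoted (the 50% and 80-90% figures are the rough ranges from the notes, not exact for any specific codec):

```python
def uncompressed_bytes(seconds, sample_rate=44100, bit_depth=16, channels=2):
    """Size of raw PCM audio (the basis of WAV/AIFF, ignoring headers):
    seconds * samples/sec * bytes/sample * channels."""
    return seconds * sample_rate * (bit_depth // 8) * channels

# One minute of CD-quality stereo audio.
one_minute = uncompressed_bytes(60)
print(one_minute)              # 10584000 bytes, roughly 10 MB

lossless = one_minute * 0.5    # ~50% reduction (FLAC-style packing)
lossy = one_minute * 0.15      # 80-90% reduction (MP3-style)
```

So a one-minute uncompressed recording is around 10 MB, a lossless version around 5 MB, and a lossy version on the order of 1 to 2 MB, which is why lossy formats dominate on mobile devices.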
So those are the main kinds of audio formats you will find. Just remember that most of
the computer sequencing programs we'll be working with record into an uncompressed
format. Once the audio is in a WAV or AIFF file, we can usually export it from the computer
into some kind of compressed format that can be played on your mobile devices.
MIDI
MIDI stands for Musical Instrument Digital Interface. This was an important development back in the
early 80s, when manufacturers discovered that they could send information between
keyboards, tone modules, and several other electronic devices so that they could communicate.
The Development of MIDI
Sequential Circuits (a company which made popular synthesizers in the late 70s and
early 80s) engineers Dave Smith and Chet Wood devised a universal synthesizer interface
in 1981.
By January 1983, they were able to demonstrate a MIDI connection, a digital interface
connection, between the Sequential Circuits Prophet-600 analogue synthesizer and a Roland
JP-6.
These were different manufacturers, but both instruments had that capability, so information
could go from one synthesizer into the other and trigger it.
On 19 August 1983, the MIDI specification was published.
Almost all manufacturers of electronic instruments started to put MIDI capability on their
devices, meaning that those devices could connect and communicate with each other.
The MIDI command set includes note-on, note-off, key velocity, pitch bend and other
methods of controlling a synthesizer.
MIDI commands don't send full audio information; they send only a small amount of
computer data carrying the basic performance information. So what you will find in a MIDI
data stream is not the full audio, but simple note-on and note-off messages.
The MIDI protocol uses eight-bit serial transmission, with one start bit and one stop bit.
It runs at a 31.25 kilobaud data rate and is asynchronous (it sends data in one direction,
one byte at a time).
So everything is a number. Let's say we have a middle C that we're trying to send: a middle C,
in the MIDI data stream, is the number 60.
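The note-to-number mapping is simple: each semitone is one step, and middle C is 60. A small sketch of that arithmetic (using the common convention that middle C is written C4):

```python
# MIDI note numbers: one step per semitone, middle C = 60.
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_number(name, octave):
    """Convert a note name and octave to its MIDI note number.
    Uses the common convention where middle C is C4 = 60."""
    return 12 * (octave + 1) + NOTE_NAMES.index(name)

print(note_number('C', 4))   # 60  (middle C)
print(note_number('A', 4))   # 69  (concert A, 440 Hz)
```

Some manufacturers label middle C as C3 instead of C4, but the underlying number sent over the wire is 60 either way.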
Connections are made through a 5-pin MIDI cable with a DIN plug, of which only 3 pins are used.
You will not find MIDI connectors on a computer. If you want MIDI to go into the computer,
you need some kind of external interface, usually a separate MIDI interface box you can buy,
into which you plug the MIDI cables from your synthesizers and other devices. From there
you would usually convert to USB, FireWire, or some other kind of connector into the
computer.
MIDI controllers
The most important part of MIDI is this: what kind of controller do we have? MIDI, here again,
is just simple performance information, so manufacturers started developing different ways of
triggering that MIDI information. Obviously, the most common type is a keyboard like this,
where we're just sending simple data, but all sorts of other manufacturers started developing
different ways of capturing performance data that could trigger MIDI performance
information.
One of the most popular was the various wind controllers.
The Yamaha WX7 has a pressure sensor at the mouthpiece, so you're not actually playing the
notes acoustically; instead, the velocity-sensitive breath trigger, combined with keys fingered
like a clarinet or saxophone, forms the MIDI note data.
At the same time, Casio developed an instrument similar to the Yamaha: it takes pressure
information from the breath and pitch information, note information, from the keys, but it also
has a little speaker on board, so you can actually hear the sounds, like a quasi-saxophone.
It also has a MIDI connector out, which allows you to go into the computer and trigger MIDI
data just as easily.
Other controllers
Now there are other companies that have developed MIDI controllers that are not modelled
on traditional Western musical instruments. Obviously, getting the Western musician to
operate and control the sounds was one part of the market: when MIDI came out,
manufacturers wanted to capture as many musicians trained on instruments as possible,
letting them apply that training to triggering electronic instruments. But there were other,
experimental devices as well.
There's a company called Infusion Systems that has made a number of different kinds of sensors that
the body can trigger to send MIDI data, e.g. the I-Cube touch sensor: just a little pad sending velocity
information or continuous controller information and pitch, which I can then assign to whatever
MIDI data structure I want.
MIDI part 2
MIDI messages
Channel Voice Messages
Channel voice messages are used to send musical performance information. The messages in this
category are Note On, Note Off, Polyphonic Key Pressure, Channel Pressure, Pitch Bend Change,
Program Change and the Control Change messages.
Note On/ Note Off/ Velocity
In MIDI systems, the activation and the release of a note are considered two separate events.
Note On: When I press a key (or blow), the instrument sends the pitch and velocity, and the note is turned on.
Note Off: When I release the key (or stop blowing), the instrument sends another MIDI note message with the
same pitch number but with a zero velocity.
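As a rough sketch of what those two events look like as bytes (the helper names here are made up; the layout follows the MIDI convention of a status byte followed by two 7-bit data bytes):

```python
# Building raw MIDI channel-voice messages as three-byte sequences.
# Status byte = message-type nibble OR'd with the channel (0-15);
# the two data bytes (pitch, velocity) are 7-bit values (0-127).

NOTE_ON = 0x90  # Note On status nibble

def note_on(channel, pitch, velocity):
    """Three bytes of a Note On message."""
    return bytes([NOTE_ON | channel, pitch & 0x7F, velocity & 0x7F])

def note_off(channel, pitch):
    """As in the notes above: a Note On with zero velocity releases the note."""
    return bytes([NOTE_ON | channel, pitch & 0x7F, 0])

press = note_on(0, 60, 100)   # middle C on channel 0, struck at velocity 100
release = note_off(0, 60)     # same status byte and pitch, velocity 0
```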
After Touch
Some MIDI keyboard instruments have the ability to sense how much pressure is being applied to
the keys while they are depressed. Some also have the ability to add certain effects, e.g. vibrato,
only while a note is sustained.
Pitch Bend
The Pitch Bend message is usually sent from a keyboard instrument, in response to changes in the
pitch bend wheel. Pitch Bend information is used to modify the pitch of sounds played on a
particular channel.
Program Change
The Program Change (or Patch Change) command is used to change the sound of a MIDI synthesizer box
or device. Each individual sound is loaded in and assigned to a particular program number. The
command therefore sends a program (sound) number to the MIDI device, which switches the device
to that particular sound.
Bank Select
Controller number zero is defined as Bank Select. The Bank Select function is used in some
synthesizers, in conjunction with the MIDI Program Change message, to expand the number of
different instrument sounds available.
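A sketch of how Bank Select and Program Change combine (hypothetical helper names; the byte values follow the same status-plus-data convention as the other channel messages):

```python
# Program Change is a two-byte message; Bank Select is Control Change
# number 0, sent before the Program Change to pick the bank.

PROGRAM_CHANGE = 0xC0
CONTROL_CHANGE = 0xB0
BANK_SELECT = 0  # controller number zero, as described above

def program_change(channel, program):
    return bytes([PROGRAM_CHANGE | channel, program & 0x7F])

def bank_select(channel, bank):
    return bytes([CONTROL_CHANGE | channel, BANK_SELECT, bank & 0x7F])

# Select patch 40 in bank 1 on channel 0:
msgs = bank_select(0, 1) + program_change(0, 40)
```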
Synthesizer and MIDI Terminology
Polyphony
The polyphony of a sound generator refers to its ability to play more than one note at a time.
Polyphony is generally measured or specified as a number of notes or voices.
Sounds
The different sounds that a synthesizer or sound generator can produce are sometimes called
“patches” or “programs”.
Multi-timbral Mode
A synthesizer or sound generator is said to be multi-timbral if it is capable of producing two or more
different instrument sounds simultaneously. With enough notes of polyphony, and “parts” (multi-
timbral) a single synthesizer can produce the sound of an entire orchestra.
General MIDI (GM) system
The general MIDI specification includes the definition of a general MIDI sound set (a patch map), a
general MIDI percussion map (mapping of percussion sounds to note numbers), and a set of general
MIDI performance capabilities (number of voices, types of MIDI controls recognized etc.). A MIDI
sequence which has been generated for use on a general MIDI instrument, should play correctly on
any General MIDI synthesizer or sound generator.
MIDI hardware controllers
Keyboards
Wind controllers
Drums
Strings
Body sensors
Environmental sensors
Video and touch sensors
Mobile technologies
Software converters
Alternatives
Digital and Audio effects
Many times when we talk about audio effects, we'll be talking about them both on a physical mixer
and in the mixers built into software.
Auxiliary sends / Effects sends
Wet / Dry mix
When we looked at the Mackie analogue mixer earlier, we saw that there were auxiliary sends,
which allow us to send the input coming into a channel to an output channel. We have the same
thing built into most multi-track recorders and software programs: an auxiliary send routes the
signal to a particular piece of software or plugin that performs the effect processing, and the result
is then routed back into the mix.
At this point we have two different signals: wet and dry. The dry signal is the non-processed sound
or the original sound recording. And the wet sound would be the processed sound.
When routing the sound back into your mix, you can then choose if you only want the wet sound, or
a mixture of the wet and dry sounds. This mixture is very common as it creates a more natural
sound.
Insert effect
An effect that is inserted into the signal path of an audio channel, thereby affecting the entire signal.
Send effect – return
These differ from insert effects in that they are not inserted directly into a channel but rather exist
as a “stand alone” unit.
Post-fader / pre-fader
These settings control how much processed sound is going into the mix and, at the same time, how much of the
mix is going into the effects processor.
Digital and audio effects categories
1) Dynamic Range Processing
Dynamic range is the difference between the quietest and loudest part of the signal.
It is usually used as an insert effect.
Automatically altering the amplitude of audio in such a way that its dynamic range is changed.
The dynamic range can be compressed or expanded.
Compression – narrowing the dynamic range by reducing the amplitude once it exceeds a threshold
value; often uses a ratio of less than 3:1.
Limiter – a compressor with a very high ratio (greater than 10:1)
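The compressor/limiter relationship can be sketched as a static gain curve (a toy sketch in dB, not any particular plugin's design):

```python
# Static compressor curve, working in dB: above the threshold, the
# output rises only 1 dB for every `ratio` dB of input.

def compress_db(level_db, threshold_db=-20.0, ratio=3.0):
    if level_db <= threshold_db:
        return level_db              # below threshold: signal passes unchanged
    over = level_db - threshold_db   # dB above the threshold
    return threshold_db + over / ratio

# 12 dB over a -20 dB threshold at 3:1 comes out only 4 dB over:
compress_db(-8.0)               # -> -16.0
# A limiter is the same curve with a very high ratio (e.g. 100:1),
# so the output barely rises above the threshold:
compress_db(-8.0, ratio=100.0)
```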
Expander – widens the dynamic range; the input is boosted faster than normal.
Noise gate – low-amplitude input is mapped straight to zero, removing noise below a threshold.
Sidechain trigger – a separate signal that triggers the effect.
Ducking – reducing the level of one audio signal when another is present. To duck music
under a voice: 1) insert a compressor (as an insert effect) on the music track; 2) create a send on the voice
track that sends part of that track to a bus. This bus is then chosen in the music track’s compressor as
the sidechain trigger.
Gating – commonly used for drums.
2) Filtering
LPF - low pass filter
HPF – high pass filter
BPF – band pass filter
Notch – band reject filter
Wah-wah – BPF with time varying centre frequencies
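To make the filter idea concrete, here is the simplest possible low-pass filter, a one-pole smoother (a minimal sketch for illustration, not a production filter design):

```python
# One-pole low-pass filter: y[n] = a*x[n] + (1-a)*y[n-1].
# Smaller `a` means heavier smoothing, i.e. a lower cutoff.

def lowpass(samples, a=0.2):
    out, y = [], 0.0
    for x in samples:
        y = a * x + (1.0 - a) * y   # blend new sample with filter memory
        out.append(y)
    return out

# A step input rises gradually instead of instantly,
# approximately [0.2, 0.36, 0.488, ...]:
smoothed = lowpass([1.0, 1.0, 1.0])
```

A high-pass filter can be sketched the same way by subtracting this smoothed output from the dry signal.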
3) Equalization
EQ – typically consists of several filters of varying types under one interface, with knobs or sliders to
control the parameters of the filters.
Peaking filters – similar to BPF, except it only affects a very narrow range of frequencies.
Shelving filters – amplifies or attenuates frequencies, above or below a cut-off frequency. (Simple
treble/bass controls on a home or car stereo)
Graphic EQ – splits the spectrum into a number of discrete bands with individual sliders.
Parametric EQ – uses one or more filters (LPF, HPF, shelving, notch or peak). Divides the spectrum
into 1-8 bands, but offers a variable centre frequency for precise control.
4) Time based effects
A memory space (buffer) stores incoming audio for some period of time and then combines it with the
original dry signal.
Stereo delay – left and right channels have different delay times.
Echo – a delay of more than 50ms
Chorus – a delay of 10 – 50ms to obtain a richer or fatter sound: modulate delay with LFO and mix
with dry signal.
Flanging – a delay of less than 10ms, modulated with an LFO, resulting in shifting peaks/notches in
the spectrum as the spacing between them periodically rises and falls.
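The common core of these effects, a fixed delay mixed back with the dry signal, can be sketched like this (echo, chorus and flanging differ mainly in the delay length and whether an LFO modulates it):

```python
# Feedforward delay: the wet signal is the input shifted by `delay`
# samples, scaled by `wet` and summed with the dry signal.

def delay_mix(samples, delay, wet=0.5):
    out = []
    for n, dry in enumerate(samples):
        delayed = samples[n - delay] if n >= delay else 0.0
        out.append(dry + wet * delayed)   # wet/dry mix, as described above
    return out

# A single click returns two samples later at half amplitude:
delay_mix([1.0, 0.0, 0.0, 0.0], delay=2)  # -> [1.0, 0.0, 0.5, 0.0]
```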
Phasing – similar to flanging in producing shifting notches in the spectrum, but uses several all-pass
filters. APFs pass all the frequencies but change the phases of the partials, resulting in notches in the
spectrum when combined with the dry signal and modulated with the LFO.
Reverberation – best used as a send effect
Has three parts:
1) Direct sound
2) Early echoes – the source sound bouncing off walls and ceiling; high-frequency energy is
absorbed.
3) Dense echoes – later, multiple bounces of the source sound off various surfaces; quieter and
darker than the early echoes.
Reverberation plug-ins
Two types:
1) Artificial reverb – uses filters and delays to mimic reflected sounds
2) Convolution (or sampling) reverb – uses impulse response recordings from actual spaces. An
impulse response recording is made by producing a sharp sound (an impulse) from the stage
of a performance hall and then recording the result. The mathematical technique of
convolution then yields a sound with the reverb of the hall where the impulse was
recorded.
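The convolution itself can be sketched in miniature; here the "impulse response" is a toy three-sample decay rather than a recorded hall:

```python
# Direct convolution: every input sample excites a scaled copy of the
# whole impulse response, and the copies are summed.

def convolve(dry, impulse_response):
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out

# A single click takes on the shape of the impulse response:
convolve([1.0, 0.0], [1.0, 0.5, 0.25])  # -> [1.0, 0.5, 0.25, 0.0]
```

Real convolution reverbs use FFT-based convolution for speed, but the result is the same operation.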
5) Other
Tremolo – “human rate” amplitude modulation
Distortion – nonlinear effect based on wave shaping functions
Time stretching and pitch changing
Sample granulation – time-domain buffer effects for breaking down signals and reconstructing them
in new combinations.
Spatialization – auditory spatial cueing
Sound morphing – extracting parameters of an analysis-resynthesis model.