

Journal of the Japanese Society for Sonic Arts, Vol.4 No.3 pp.4–7

Research Report

“BRAIN LISTENING” – A SOUND INSTALLATION WITH EEG SONIFICATION –

Gen Hori†§

†Faculty of Business Administration, Asia University

Tomasz M. Rutkowski‡§

‡TARA Life Science Center, University of Tsukuba

ABSTRACT

A sound installation using sonification of EEG (electroencephalogram, brain waves) is implemented based on a previous development in 2000. It consists of a wireless EEG headset, a PC with Max/MSP installed on it, and an audio interface with active speakers. Two Max/MSP patches for EEG sonification called “frequency modulation of sine wave” and “harmonic structure replication” are described.

1. INTRODUCTION

The usefulness and potential of sonification have been emphasized[1][2] since around 1999, with target applications spanning from medical diagnosis and data analysis to general-purpose user interfaces including web browsers and car navigation systems. Around that time, the first author was engaged in the development of an early EEG sonification system and gave his first presentation on the system[3] at DSP Summer School 2000 (DSPSS2000), an annual workshop on media arts using Max/MSP hosted by the Institute of Advanced Media Arts and Sciences (IAMAS). The present paper reports on a sound installation “Brain Listening,” a reimplementation based on the development in 2000 that takes advantage of recent updates in EEG measurement systems and PCs (personal computers), in which the listener (the observer) wears a lightweight wireless EEG headset and listens to sounds generated from his/her brain waves to perceive brain activities of which we are unconscious in daily life. The fall in the price of EEG measurement systems and the performance improvements of PCs over that time enable us to implement the whole system on a low budget and process 14-channel EEG signals in real time.

Figure 1. The first author gave his first presentation on EEG sonification[3] in DSPSS2000 at IAMAS.

§ RIKEN Brain Science Institute

The rest of the paper is organized as follows. Section 2 illustrates the hardware organization of “Brain Listening.” Section 3 describes two EEG sonification algorithms using their implementation as Max/MSP patches. Section 4 contains discussion.

2. HARDWARE ORGANIZATION

For measuring the listener’s EEG signals, “Brain Listening” uses the Emotiv¹ EEG neuroheadset, which is easy to wear and transmits measured signals wirelessly (Fig.2). It sets up 14 electrodes on the scalp, among which eight are assigned to the forehead while six to the side and back of the head (Fig.3). The transmitted EEG signals are received by the Emotiv USB receiver connected to a USB port of the PC and sonified using Max/MSP². The details of the sonification are discussed in the following section. The sonified data are converted to analog audio signals using the MOTU³ UltraLite-mk3 Hybrid audio interface connected to a USB port of the PC and finally played by five active speakers connected to the audio interface. The speakers are located surrounding the listener and termed “A” to “E” from the left to the right in Fig.4. The hardware organization explained so far is illustrated in Fig.5.

¹ http://www.emotiv.com/
² http://cycling74.com/
³ http://www.motu.com/

Figure 2. The Emotiv EEG neuroheadset measures EEG signals using 14 electrodes and transmits measured signals wirelessly.

Figure 3. The Emotiv EEG neuroheadset sets up 14 electrodes on the scalp, among which eight are assigned to frontal while six to temporal and parietal sites.

3. SOFTWARE IMPLEMENTATION

We pick up and discuss two Max/MSP patches for EEG sonification in the following sections. In the patches, EEG sonification is carried out channel-wise with a subpatch that receives a single-channel EEG signal and sends out a single-channel audio signal.

Figure 4. The speakers (termed “A” to “E” from the left to the right in the picture) are located surrounding the listener.

Figure 5. The measured EEG signals are transmitted wirelessly to the PC, where Max/MSP sonifies them and sends them to the speakers through the audio interface.

3.1. Frequency modulation of sine wave

Fig.6 shows the Max/MSP patch for frequency modulation in which the EEG signals are sonified channel-wise with the subpatch fm and sent to one of the five speakers. For example, the EEG signal of the AF3 channel is sonified with the subpatch fm and sent to the speaker C. The sonified EEG signal of each electrode is sent to the speaker in the direction of the electrode on the scalp so that the listener listens to the sonified EEG signals as if s/he is inside his/her brain. The assignments of the EEG channels to the speakers are as follows: T7+P7+O1→A, F7+FC5→B, AF3+AF4+F3+F4→C, F8+FC6→D, T8+P8+O2→E. In the subpatch fm, the EEG signal from the inlet is multiplied by mod, offset by freq, and used as the frequency of the sine wave generated by cycle~; that is, mod and freq are the modulation amount and the carrier frequency, respectively.

Figure 6. The Max/MSP patch for frequency modulation applies the subpatch fm (right) to each EEG channel and sends its output to the assigned speaker.
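The logic of the fm subpatch can be sketched offline in Python. This is an illustrative reconstruction, not the authors’ Max/MSP code: the parameter names mod and freq come from the paper, while the function name, default values, and the assumption that the EEG channel has been resampled to the audio rate are ours.

```python
# Sketch of the "fm" subpatch logic: the EEG sample stream is scaled by
# `mod`, offset by `freq`, and used as the instantaneous frequency of a
# sine oscillator (the role played by cycle~ in the Max/MSP patch).
import numpy as np

def fm_sonify(eeg, mod=100.0, freq=440.0, sr=44100):
    """Sonify one EEG channel; eeg is assumed resampled to audio rate sr."""
    inst_freq = freq + mod * eeg                   # carrier freq + modulation
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr  # integrate frequency to phase
    return np.sin(phase)

# Toy usage: a 10 Hz "alpha-like" test tone stands in for a real EEG channel.
sr = 44100
t = np.arange(sr) / sr
fake_eeg = np.sin(2 * np.pi * 10 * t)
audio = fm_sonify(fake_eeg, mod=100.0, freq=440.0, sr=sr)
```

With mod = 0 the output is a plain 440 Hz sine; raising mod makes slow EEG fluctuations audible as pitch movement around the carrier.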

3.2. Harmonic structure replication

Fig.9 shows the subpatch for harmonic structure replication, which is equivalent to the “EEG vocoder” patch (Fig.8) used in the DSPSS2000 presentation[3] in Fig.1. The “EEG vocoder” patch is programmed based on the common vocoder⁴ diagram in Fig.7.

Fig.7 outlines the audio signal processing of a vocoder. A vocoder receives two audio signals, the microphone signal (also called the modulator) and the instrument signal (also called the carrier), and modifies the instrument signal in such a way that it replicates the harmonic structure of the microphone signal. In the diagram, the black solid lines show audio signal paths while the gray lines show control signal paths. The upper BPF (band pass filter) bank splits the microphone signal into n frequency bins. Typically the number of BPFs is between 10 and 20 and the center frequencies of the BPFs are equally spaced on the logarithm of the frequency. The rectifiers rectify⁵ the split microphone signals, and the envelopes are then extracted for the respective frequency bins; these envelopes represent the harmonic structure of the microphone signal. The lower BPF bank splits the instrument signal into n frequency bins as well. The VCAs (voltage controlled amplifiers) receive the split instrument signals and the envelopes and output the split instrument signals amplified according to the envelopes. In a vocoder, the center frequencies of corresponding BPFs are basically set to the same values,

f′1 = f1, f′2 = f2, …, f′n = fn.

⁴ Vocoder (short for voice encoder) is a technique originally developed for telecommunications and later adopted in electronic musical instruments. Here we refer to a vocoder as an electronic musical instrument.

⁵ To fold back the lower half of the waveform to the upper half.

Figure 7. In the vocoder diagram, BPF banks are used to extract and replicate the harmonic structure of the microphone signal.

In the “EEG vocoder” patch, we use an EEG signal as the modulator and white noise as the carrier, since white noise contains a broad spectrum across the audio-frequency range. Since the frequency ranges of EEG signals and audio signals differ, we make a correspondence by setting the center frequencies of the BPFs. We use four BPFs and set

f′1 = 2 Hz, f′2 = 6 Hz, f′3 = 10 Hz, f′4 = 20 Hz

to extract the envelopes of delta (0–3 Hz), theta (4–7 Hz), alpha (8–13 Hz) and beta (14–40 Hz) waves and use them to amplify filtered white noise with the center frequencies

f1 = 300 Hz, f2 = 600 Hz, f3 = 900 Hz, f4 = 1200 Hz.
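The harmonic structure replication described above can be sketched offline in Python. This is an illustration of the signal flow, not the authors’ Max/MSP patch: the band edges and noise center frequencies follow the text, while the function names, the brick-wall FFT filter standing in for the BPF banks, and the EEG sampling rate of 128 Hz are our assumptions.

```python
# Sketch of harmonic structure replication: the envelopes of four EEG bands
# (delta/theta/alpha/beta) amplify four band-passed copies of white noise.
import numpy as np

def bandpass(x, lo, hi, fs):
    """Ideal (brick-wall) band-pass via FFT masking, standing in for a BPF."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, len(x))

def eeg_vocoder(eeg, fs_eeg=128, fs_audio=44100):
    eeg_bands = [(0.5, 3), (4, 7), (8, 13), (14, 40)]  # delta/theta/alpha/beta
    centers = [300, 600, 900, 1200]                    # noise carrier bands (Hz)
    n_audio = int(len(eeg) * fs_audio / fs_eeg)
    noise = np.random.randn(n_audio)
    t_eeg = np.arange(len(eeg)) / fs_eeg
    t_audio = np.arange(n_audio) / fs_audio
    out = np.zeros(n_audio)
    for (lo, hi), fc in zip(eeg_bands, centers):
        env = np.abs(bandpass(eeg, lo, hi, fs_eeg))    # rectified band signal
        env = np.interp(t_audio, t_eeg, env)           # envelope at audio rate
        carrier = bandpass(noise, 0.8 * fc, 1.25 * fc, fs_audio)
        out += env * carrier                           # VCA stage
    return out

# Toy usage: 2 seconds of alpha-dominated test EEG.
fs_eeg = 128
t = np.arange(2 * fs_eeg) / fs_eeg
fake_eeg = np.sin(2 * np.pi * 10 * t)
audio = eeg_vocoder(fake_eeg, fs_eeg=fs_eeg)
```

With the alpha-dominated input, most of the output energy sits in the 900 Hz noise band, which is the intended mapping: each EEG rhythm is heard as loudness in its own audible band.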

4. DISCUSSION

A sonification method in which the white noise input and the lower BPF bank of harmonic structure replication are replaced by a series of sine wave generators is called overtone mapping[6]. Mathematically, overtone mapping can be regarded as the limit of harmonic structure replication as the Q factors of the lower BPF bank go to infinity.
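The relation to overtone mapping can be made concrete with a short sketch. This is our illustration under stated assumptions, not code from [6]: the carrier frequencies follow Section 3.2, while the function name and signature are hypothetical, and the envelopes are taken as already extracted at audio rate.

```python
# Sketch of overtone mapping: the noise carrier and lower BPF bank of the
# vocoder are replaced by pure sine generators at the carrier frequencies,
# each amplified by the corresponding EEG band envelope.
import numpy as np

def overtone_map(envelopes, freqs, fs_audio=44100):
    """envelopes: per-band envelope arrays, all already at audio rate."""
    n = len(envelopes[0])
    t = np.arange(n) / fs_audio
    return sum(env * np.sin(2 * np.pi * f * t)
               for env, f in zip(envelopes, freqs))

# Toy usage with constant envelopes (a real patch would use EEG band envelopes).
envs = [np.full(44100, a) for a in (0.5, 0.3, 0.2, 0.1)]
audio = overtone_map(envs, [300, 600, 900, 1200])
```

Because each sine has zero bandwidth, this is exactly the infinite-Q limit noted above: the noise band around each center frequency collapses to a single partial.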


Figure 8. The “EEG vocoder” patch used in the DSPSS2000 presentation in Fig.1 is programmed based on the vocoder diagram described in Fig.7.

5. REFERENCES

[1] S. Barrass and G. Kramer, “Using sonification,” Multimedia Systems, Vol.7, No.1, pp.23–31, 1999.

[2] T. Hermann and H. Ritter, “Listen to your data: model-based sonification for data analysis,” in Advances in Intelligent Computing and Multimedia Systems, Lasker G. E. ed., pp.189–194, 1999.

[3] G. Hori, “Sound performed by brain wave: toward brain wave theremin,” DSP Summer School 2000 (DSPSS2000), Institute of Advanced Media Arts and Sciences (IAMAS), Gifu, Japan, September 13–17, 2000.

[4] T. M. Rutkowski, F. Vialatte, A. Cichocki, D. Mandic and A. K. Barros, “Auditory feedback for brain computer interface management – an EEG data sonification approach,” Knowledge-Based Intelligent Information and Engineering Systems, Lecture Notes in Artificial Intelligence, Vol.4253, pp.1232–1239, Springer, 2006.

[5] T. Kaniwa, H. Terasawa, M. Matsubara, T. M. Rutkowski and S. Makino, “EEG auditory steady-state synchrony patterns sonification,” Proc. Fourth APSIPA Annual Summit and Conference (APSIPA ASC 2012), paper #364, 2012.

[6] H. Terasawa, J. Parvizi and C. Chafe, “Sonifying ECoG seizure data with overtone mapping: A strategy for creating auditory gestalt from correlated multichannel data,” Proc. Intl. Conf. on Auditory Display (ICAD2012), pp.129–134, 2012.

Figure 9. The subpatch for harmonic structure replication is equivalent to the “EEG vocoder” patch in Fig.8 and is applied to each EEG channel.

6. AUTHORS’ PROFILES

Gen Hori

Gen Hori was born and brought up in Japan. He received the B.E. and M.E. degrees in mathematical engineering and the Ph.D. degree in computer science from the University of Tokyo in 1991, 1993, and 1996, respectively. He is currently an Associate Professor with the Faculty of Business Administration, Asia University, as well as a visiting scientist at RIKEN Brain Science Institute. His research interests include matrix computation based on the theory of Lie groups, biomedical signal processing, bioinformatics, mobile communication systems, music information processing utilizing probabilistic models, and natural language processing related to lyrics.

Tomasz M. Rutkowski

Tomasz M. Rutkowski was born in Poland. He lives in Japan and serves as an assistant professor at the University of Tsukuba and as a visiting scientist at RIKEN Brain Science Institute. Professor Rutkowski’s research interests include computational neuroscience, especially brain-computer interfacing technologies, computational modeling of brain processes, neurobiological signal and information processing, multimedia interfaces and interactive technology design. He is also a member of the “Brain dreams Music” project (http://brain-dreams-music.net/) targeting the development of a new musical instrument that can be played by brain waves, bridging music and neuroscience.
