

SMC2016.net

SMC Sound & Music Computing NETWORK

CONFERENCE

S.T.R.E.A.M. FESTIVAL

CONFERENCE GUIDE

ABSTRACT BOOKLET

SMC HAMBURG/GERMANY

31. 8.–3. 9. 2016

13th Sound & Music Computing Conference


| SMC 2016

COMMITTEE . . . 4
SPONSORS . . . 6
VENUES . . . 7
GENERAL INFORMATION . . . 7
MAPS . . . 8
SCHEDULE . . . 12

KEYNOTES . . . . . . . . . . . . . . . . . . . 18

SESSIONS . . . 20
PAPER SESSIONS . . . 26
SPECIAL SESSIONS . . . 40
POSTER SESSIONS . . . 43

S.T.R.E.A.M. FESTIVAL . . . 58
CONCERTS . . . 62
INSTALLATIONS . . . 86
LISTENING ROOM . . . 92

WELCOME

WELCOME FROM THE CHAIRMAN OF THE PROGRAM COMMITTEE

Dear Participants:

On behalf of the SMC2016 Program Committee I would like to warmly welcome you to Hamburg and the 13th Sound and Music Computing Conference and Summer School as well as the S.T.R.E.A.M. festival. The summer school, conference and festival are organized by the Hamburg University of Music and Theatre in association with the Hamburg University of Applied Sciences, the University of Hamburg and the Leuphana University Lüneburg.

At this point I would like to express my gratitude to all those who have contributed to this conference: the SMC2016 Steering Committee and the countless metareviewers, reviewers and subreviewers, whose time, advice and opinions shaped a balanced and interesting program; the three co-chairs Rolf Grossmann, Sascha Lemke and Robert Mores for their generous and tireless support; the speakers for preparing and presenting talks that will certainly inspire us; the composers and sonic artists for their exciting contributions; the editorial team for working hard to enable us to present the program and proceedings before the start of the conference; the local organization committee for their enthusiastic efforts; the VAMH, DEGEM, Kampnagel and Finkenau teams, who supported us in various ways; and, last but by no means least, our sponsors, who helped us financially to turn this conference into reality.

Sincerely,
Georg Hajdu




COMMITTEES

John Dack, Alberto de Campo, Stefano Delle Monache, Anthony De Ritis, Myriam Desainte-Catherine, Simon Dixon, Daniel Dominguez, Tony Doyle, Shlomo Dubnov, Bernd Enders, Helmut W. Erdmann, Cumhur Erkut, Bjoern Erlach, Mikael Fernstrom, Arthur Flexer, Federico Fontana, Martin von Frantzius, Jason Freeman, Mike Frengel, Anders Friberg, Henrik Frisk, Peter Gahn, Emilio Gallego, Anastasia Georgaki, Michele Geronazzo, Bruno Giordano, Volker Gnann, Werner Goebl, Masataka Goto, Francesco Grani, Thomas Grill, Rolf Grossmann, Florian Grote, Yupeng Gu, Carlos Guedes, Kerry Hagan, Pierre Hanna, Kjetil Falkenberg Hansen, Sarah-Indriyati Hardjowirogo, Michael Harenberg, Mitsuyo Hashida, Folkmar Hein, Joachim Heintz, Hannes Hoelzl, Jan Jacob Hofmann, Risto Holopainen, Andrew Horner, Daniel Hug, Song Hui, Leopold Hurt, Ozgur Izmirli, Dariusz Jackowski, Pierre Jouvelot, Yoshihiro Kanno, Haruhiro Katayose, Damián Keller, Howie Kenty, David Kim-Boyle, Katharina Klement, Volkmar Klien, Peter Knees, Juraj Kojs, Panayiotis Kokoras, Reinhard Kopiez, Andrej Koroliov, George Kosteletos, Johannes S. Kreidler, Johannes T. Kretz, Mauro Lanza, Goran Lazarevic, Matthias Leimeister, Sascha Lemke, Marcia Lemke-Kern, Stéphane Letz, Hans-Gunter Lock, Lin-Ni Liao, Tapio Lokki, Filipe Cunha Monteiro Lopes, Hanna Lukashevich, Sylvain Marchand, Matija Marolt, Davide Andrea Mauro, Tom Mays, Patrick McGlynn, Annamaria Mesaros, Romain Michon, Julia Mihaly, Kostas Moschos, Dafna Naphtali, Per Anders Nilsson, Vesa Norilo, Ivana Ognjanovic, Konstantina Orlandatou, Daniel Overholt, Rui Pedro Paiva, Stefano Papetti, Richard Parncutt, Dale Parson, Jesper Pedersen, Jussi Pekonen, Luís Antunes Pena, Malte Pelleter, Alfonso Perez, Nils Peters, Jean-Francois Petiot, Marcelo Pimenta, Mark Plumbley, Pietro Polotti, Pedro J. Ponce De León, Laurent Pottier, Carlos Pérez-Sancho, Yago de Quay, Marcelo Queiroz, Rudolf Rabenstein, Laurie Radford, Christopher Raphael, Josh Reiss, Bernard Richardson, Carlos Andres Rico, Jane Rigler, Michal Rinott, Curtis Roads, Antonio Roda', Francisco Rodríguez Algarra, Julian Rohrhuber, Gerard Roma, Charalampos Saitis, Chris Salter, Augusto Sarti, Greg Schiemer, Andi Schoon, Alexander Schubert, Véronique Sebastien, Sertan Şentürk, Stefania Serafin, Jonathan Shapiro, Matthew Shlomowitz, Johannes S. Sistermanns, Julius Smith, Ryan Ross Smith, Tamara Smyth, Jeffrey Snyder

CHAIRS

Paper Chair: Rolf Grossmann
Music Chair: Sascha Lemke
Chair of Installations: Robert Mores
Conference Chair: Georg Hajdu

ADVISORY BOARD

Hans-Joachim Braun, Wolfgang Fohl, Rolf Grossmann, Robert Mores, Clemens Wöllner

SMC BOARD

President: Stefania Serafin
Conference Coordinator: Federico Avanzini
Summer School Coordinator: Emilia Gomez
Communication Coordinator: Cumhur Erkut
Web Coordinator: Arshia Cont
Music Coordinator: Juraj Kojs

SMC STEERING COMMITTEE

France: Myriam Desainte-Catherine, Dominique Fober, Gérard Assayag, Yann Orlarey
Italy: Davide Rocchesso, Ricardo Dapello, Federico Avanzini, Pietro Polotti
Greece: Anastasia Georgaki, Ioannis Zannos
Germany: Michael Harenberg, Martin Supper, Stefan Weinzierl, Martin Schüttler
Spain: Xavier Serra, Emilia Gomez
Portugal: Fabien Gouyon, Carlos Guedes, Alvaro Barbosa
Denmark: Stefania Serafin, Jan Larsen
Sweden: Roberto Bresin
United Kingdom: Simon Dixon

ORGANIZING COMMITTEE

Xiao Fu, Jacob Sello, Goran Lazarevic, Carlos Rico, Jelena Dabic, Aigerim Seilova, Benedict Carey, Clarissa Wirth, Madita Wittkopf, Daniel Dominguez, Philipp Keßling, Philipp Olbrich

SUBREVIEWERS, REVIEWERS & METAREVIEWERS

Alessandro Anatrini, Torsten Anders, Luis Antunes, Andreas Arzt, Anders Askenfelt, Federico Avanzini, Stefano Baldan, Jose R. Beltran, Emmanouil Benetos, Sebastian Böck, Mattia Bonafini, Sofia Borges, Till Bovermann, Hans-Joachim Braun, Bill Brunson, Ivica Bukvic, Edmund Campion, Sergio Canazza, Yinan Cao, Benedict Carey, Peter Castine, Chris Chafe, Qiangbin Chen, Eric Chou, Se-Lien Chuang, Marko Ciciliani, Fionnuala Conway



SPONSORS

This conference is supported by the Landesforschungsförderung Hamburg.

VENUES

CONFERENCE

HAW University of Applied Sciences
Finkenau 35, 22081 Hamburg
Telephone: +49 / 40 / 428 75-0
www.haw-hamburg.de

S.T.R.E.A.M. FESTIVAL

HAW (lunch concerts): see above

Kampnagel
Jarrestraße 20, 22303 Hamburg
Telephone: +49 / 40 / 27 09 49 49
www.kampnagel.de

Keynotes are given at both locations.

Please refer to the SMC website or pages XXXXXX of this program.

IN GENERAL

REGISTRATION/DESK

The registration desk at the foyer of the Finkenau Media Campus opens on Wednesday, August 31, 2016 at 12.00.

Opening hours: Thursday and Friday 9.00–17.00, Saturday 9.00–15.00

CONFERENCE FEE

The conference fee includes a bag with the conference and festival programs, a festival pass, a ticket to the buffet, and a name badge granting access to all events at the Finkenau Campus.

PLEASE WEAR YOUR NAME BADGE AT THE CONFERENCE AND CONCERT VENUES.

BUFFET

The buffet will take place on September 2, at Kampnagel (KMH) from 21.30.

INTERNET ACCESS

Eduroam is available at the Finkenau Media Campus.

Kampnagel is providing public access in its foyer.

GENERAL INFORMATION

Jorge Solis, Simone Spagnol, Georgia Spiropoulos, Manfred Stahnke, Ipke Starke, Nikos Stavropoulos, Martin Supper, Gregory Surges, Kenji Suzuki, Pierre-Andre Taillard, Tapio Takala, Akira Takaoka, Hans Tammen, Luis Teixeira, Mari Tervaniemi, Felix Thiesen, Etienne Thoret, Renee Timmers, Giuseppe Torre, Jim Torresen, Yu-Chung Tseng, Shawn Trail, Caroline Traube, Ken Ueno, Marti Umbert, Vesa Valimaki, Leon Van Noorden, Douglas Van Nort, Akito van Troyer, Domenico Vicinanza, Lindsay Vickery, Gualtiero Volpe, Andreas Weixler, Jeremy Wells, Caroline Whiting, Kin Hong Wong, Jim Woodhouse, Lonce Wyse, Clemens Wöllner, Jiajun Yang, Woon Seung Yeo, Steven Yi, Kazuyoshi Yoshii, Jaeseong You, Ioannis Zannos, Massimiliano Zanoni, Ivan Zavada, Paolo Zavagna, Fengyun Zhu




MEDIA CAMPUS, FINKENAU 35 – FLOOR PLAN OVERVIEW

Lectures/Concerts: Forum Finkenau (E.001)
Musebot Chillout Session: Flur (1.8001 A)
Poster Sessions: Library (2.001)
CoPeCo/Student Installations: Rooms 2.014–2.019
Installations: E33, IL SE TOURNA (E39), KANON-SPIEL (E46), OPERATION DEEP POCKETS (E45), RIZOMA ALCINA (E42), AGGREGAT (E48)
Third floor: A1 (3.015) – VOX NIHILI, R1 (3.016) – SONIFICATION OF DARK MATTER, R4 (3.012) – DETRIUS II, R5 (3.010) – SOUND KALEIDO, R6 (3.017) – LISTENING ROOM, Flur (3.8002) – THE NEUROMUSIC




KAMPNAGEL, JARRESTRASSE 20



CONFERENCE SCHEDULE

WEDNESDAY – 31.8.

14.00 | Finkenau Foyer: Opening of Registration
15.00–17.30 | Finkenau Forum: SHOWCASE HAMBURG: Research and Development in Sound and Audio
17.30 | Finkenau Garden: Opening of MAELSTROM
17.40–19.00 | Dinner
19.00–20.00 | Kampnagel K2: Round Table Discussion
20.00–22.30 | Kampnagel K2 & KMH: CONCERTS

THURSDAY – 1.9.

9.00 | Finkenau Forum: Welcome
9.20–10.40 | Finkenau Forum: SESSION I: Rooms, Spaces, Environments
10.40–11.00 | Finkenau Foyer & Cafeteria: Coffee Break
11.00–12.00 | Finkenau Forum: KEYNOTE: Cat Hope – Artistic research teams in electronic music
12.00–14.40 | Finkenau Forum, Produktionslabor, Library & Cafeteria: Posters I, Concerts, Lunch
14.40–16.00 | Finkenau Forum: SESSION II: Musical Instruments and Sonic Interaction Design
16.00–16.20 | Finkenau Foyer & Cafeteria: Coffee Break
16.20–17.40 | Finkenau Forum: SESSION III: Perspectives on Perception and Performance
16.20–17.40 | Finkenau Ditze-Saal: SPECIAL 'Giovanni De Poli' SESSION: Instrument, Interface, Space
17.40–19.00 | Dinner
19.00–20.00 | Kampnagel K2: KEYNOTE: Trevor Pinch – The Social Construction of Technology and the Social Construction of Sound: the Moog Synthesizer and its Sounds
20.00–22.30 | Kampnagel K2 & KMH: CONCERTS

FRIDAY – 2.9.

9.20–10.40 | Finkenau Forum: SESSION IV: Performing Sound: Gestures and Data
10.40–11.00 | Finkenau Foyer & Cafeteria: Coffee Break
11.00–12.00 | Finkenau Forum: KEYNOTE: Eduardo Miranda – Towards Natural and Artificial Intelligence Coupling with Music Neurotechnology
12.00–14.40 | Finkenau Forum, Produktionslabor, Library & Cafeteria: Posters II, Concerts, Lunch
14.40–16.00 | Finkenau Forum: SESSION V: History and Aesthetics of Digital Audio
16.00–16.20 | Finkenau Foyer & Cafeteria: Coffee Break
16.20–17.40 | Finkenau Forum: SESSION VI: Algorithmic Generation of Sound and Music
16.20–17.40 | Finkenau Ditze-Saal: SPECIAL SESSION: Real-time Composition and Animated Notation
17.40–19.00 | Dinner
19.00–20.00 | Kampnagel K2: CELEBRATION & KEYNOTE: John Chowning – The Early Years of Computer Music and Ligeti's Dream – Stanford, Paris, CCRMA, Hamburg
20.00–22.30 | Kampnagel K2, KMH & K4: CONCERTS & Buffet

SATURDAY – 3.9.

9.20–10.40 | Finkenau Forum: SESSION VII: Digital Learning Environments: Singing, Analysing and Coding
10.40–11.00 | Finkenau Foyer & Cafeteria: Coffee Break
11.20–12.40 | Finkenau Forum: SESSION VIII: Computational Musicology and Mathematical Music Theory
12.40–13.00 | Finkenau Forum: Farewell and Presentation of SMC2017
13.00–15.00 | Finkenau Forum, Produktionslabor, Library & Cafeteria: Posters III, Concert, Lunch
18.00–22.00 | Kampnagel K2 & KMH: CoPeCo CONCERTS



CONCERT SCHEDULE

SPECIAL EVENT: HAMBURG HARBOR BOAT TRIP
SUNDAY, SEPTEMBER 4, 16.00–19.00

LOCATION Landungsbrücken, Brücke 10, inner side

FEATURING Spectral Youth, Henning Rietz, Carlos Rico, Sergio Vasquez and Takuro Shibayama

As a sort of coda to the SMC2016 conference and S.T.R.E.A.M. festival, we have prepared a special event for our guests, taking place on Sunday, 4.9.2016, from 16.00 to 19.00: a hybrid experience of sightseeing tour, concert and party! It will all happen on a wonderful, quaint little boat taking us along the shores of the Elbe and through the harbour, with a great view of the Elbphilharmonie, of course. Once an hour we will make a short stop, and the entire ride will be moderated.

Our special guest will be the band Spectral Youth, led by Carla Genchi, one of our CoPeCo students. Part of the event will also be a block of specially selected pieces from our "Call for Sonic Works" that qualified for this particular venue, as well as some pieces by our own graduates.

The ride costs 40 euros per person; apart from great sightseeing, awesome music and nice conversations with colleagues, there will be an open bar as well.

Due to the ship's size we are limited to a maximum of 50 people, so go and get your tickets!

HOW TO GET THERE? Simply meet us at Landungsbrücken and take a stroll to Brücke 10, inner side, or visit our website for more info.

FINKENAU CONCERTS

WEDNESDAY 31.8.
17.30 | Garden: CONCERT/INSTALLATION: MAELSTROM

THURSDAY 1.9.
12.00 | Produktionslabor: LUNCH CONCERT I
13.30 | Forum: LUNCH CONCERT II

FRIDAY 2.9.
12.00 | Produktionslabor: LUNCH CONCERT III
13.30 | Forum: LUNCH CONCERT IV

SATURDAY 3.9.
12.00 | Produktionslabor: LUNCH CONCERT V

KAMPNAGEL CONCERTS

WEDNESDAY 31.8.
19.00 | K2: ROUND TABLE: John Chowning, Manfred Stahnke, Karl Steinberg, Martin Supper & more
20.00 | K2: CONCERT I: Alexander Schubert
21.30 | KMH: CONCERT II: Showcase Hamburg

THURSDAY 1.9.
19.00 | K2: KEYNOTE: Trevor Pinch
20.00 | K2: CONCERT III: Real-time Composition, Improvisation and Animated Score I
21.30 | KMH: CONCERT IV: Real-time Composition, Improvisation and Animated Score II

FRIDAY 2.9.
19.00 | K2: KEYNOTE: John Chowning
20.00 | K2: CONCERT V: John Chowning & Friends
21.00 | KMH: BUFFET
22.00 | K4: CONCERT VI: Concert-Party by Jacob Sello

SATURDAY 3.9.
18.00 | K2: CONCERT VI: CoPeCo Final Projects I
20.30 | KMH: CONCERT VII: CoPeCo Final Projects II



DAY BY DAY SCHEDULE

WEDNESDAY, AUGUST 31, 2016

14.00 | Finkenau Foyer: Opening of Registration
15.00–17.30 | Finkenau Forum: SHOWCASE HAMBURG: Research and Development in Sound and Audio
17.30 | Finkenau Garden: Opening of MAELSTROM
19.00–20.00 | Kampnagel K2: Round Table Discussion with John Chowning, Albrecht Schneider, Manfred Stahnke, Karl Steinberg and Martin Supper. Moderation: Georg Hajdu
20.00 | Kampnagel K2: CONCERT I: Alexander Schubert Portrait Concert
21.30 | Kampnagel KMH: CONCERT II: Showcase Hamburg Concert curated by VAMH

THURSDAY, SEPTEMBER 1, 2016

9.00 | Finkenau Forum: Welcome
9.20–10.40 | Finkenau Forum: SESSION I: Rooms, Spaces, Environments
10.40–11.00 | Finkenau Foyer & Cafeteria: Coffee Break
11.00–12.00 | Finkenau Forum: KEYNOTE: Cat Hope – Artistic research teams in electronic music
12.00–14.40 | Finkenau Library: Posters I
12.15 | Finkenau Produktionslabor: Lunch Concert I
13.30 | Finkenau Forum: Lunch Concert II
14.40–16.00 | Finkenau Forum: SESSION II: Musical Instruments and Sonic Interaction Design
16.00–16.20 | Finkenau Foyer & Cafeteria: Coffee Break
16.20–17.40 | Finkenau Forum: SESSION III: Perspectives on Perception and Performance
16.20–17.40 | Finkenau Ditze-Saal: SPECIAL 'Giovanni De Poli' SESSION: Instrument, Interface, Space
19.00–20.00 | Kampnagel K2: KEYNOTE: Trevor Pinch – The Social Construction of Technology and the Social Construction of Sound: the Moog Synthesizer and its Sounds
20.00 | Kampnagel K2: CONCERT III: Real-time Composition, Improvisation and Animated Score I
21.30 | Kampnagel KMH: CONCERT IV: Real-time Composition, Improvisation and Animated Score II

FRIDAY, SEPTEMBER 2, 2016

9.20–10.40 | Finkenau Forum: SESSION IV: Performing Sound: Gestures and Data
10.40–11.00 | Finkenau Foyer & Cafeteria: Coffee Break
11.00–12.00 | Finkenau Forum: KEYNOTE: Eduardo Miranda – Towards Natural and Artificial Intelligence Coupling with Music Neurotechnology
12.00–14.40 | Finkenau Library: Posters II
12.15 | Finkenau Produktionslabor: Lunch Concert III
13.30 | Finkenau Forum: Lunch Concert IV
14.40–16.00 | Finkenau Forum: SESSION V: History and Aesthetics of Digital Audio
16.00–16.20 | Finkenau Foyer & Cafeteria: Coffee Break
16.20–17.40 | Finkenau Forum: SESSION VI: Algorithmic Generation of Sound and Music
16.20–17.40 | Finkenau Ditze-Saal: SPECIAL SESSION: Real-time Composition and Animated Notation
19.00–20.00 | Kampnagel K2: CELEBRATION & KEYNOTE: John Chowning – The Early Years of Computer Music and Ligeti's Dream – Stanford, Paris, CCRMA, Hamburg
20.00 | Kampnagel K2: CONCERT V: John Chowning & Friends
21.00 | Kampnagel KMH: Buffet
22.00 | Kampnagel K4: Concert-Party by Jacob Sello

SATURDAY, SEPTEMBER 3, 2016

9.20–10.40 | Finkenau Forum: SESSION VII: Digital Learning Environments: Singing, Analysing and Coding
10.40–11.00 | Finkenau Foyer & Cafeteria: Coffee Break
11.20–12.40 | Finkenau Forum: SESSION VIII: Computational Musicology and Mathematical Music Theory
12.40–13.00 | Finkenau Forum: Farewell and Presentation of SMC2017
13.00–14.30 | Finkenau Library: Posters III
13.15 | Finkenau Produktionslabor: Lunch Concert V
18.00 | Kampnagel K2: CoPeCo Concert I
20.30 | Kampnagel KMH: CoPeCo Concert II

SUNDAY, SEPTEMBER 4, 2016

16.00–19.00 | St. Pauli Landungsbrücken: Musical Boat Trip around the Hamburg Harbor

INSTALLATIONS

Installations will be exhibited at HAW, Finkenau Campus. For the floor plans of HAW, please consult the maps on pages 8–10 of this program.


KEYNOTES

THURSDAY, SEPTEMBER 1
CAT HOPE
ARTISTIC RESEARCH TEAMS IN ELECTRONIC MUSIC
FINKENAU, FORUM | 11.00–12.00

[Abstract not available at the time of printing.]

THURSDAY, SEPTEMBER 1
TREVOR PINCH
THE SOCIAL CONSTRUCTION OF TECHNOLOGY AND THE SOCIAL CONSTRUCTION OF SOUND: THE MOOG SYNTHESIZER AND ITS SOUNDS
KAMPNAGEL, K2 | 19.00–20.00

"Musical instrument design is one of the most sophisticated and specialized technologies that we humans have developed." – Robert Moog. If music is about technologies, then how should we think about technologies? In this talk I will approach the history of electronic music as conceived by another academic discipline: Science and Technology Studies (STS). The mutual shaping of technology and culture offers clues to thinking about the development of the technology used in electronic music and how it shapes and is shaped by culture. The STS approach draws attention to "opening the black box", which means understanding the design and working of musical technologies and thinking about how they could be otherwise, and paths not taken. It leads scholars to focus upon users, tinkerers, and intermediaries, and the particular contexts and spaces where instruments are developed, tested, demonstrated, and used. I will argue that following salesmen is as important as following composers. My main examples will come from the development of the Moog and Buchla electronic music synthesizers in the period 1964–75, but I will also reflect a little on the nascent British synthesizer industry from that period, particularly EMS.

FRIDAY, SEPTEMBER 2
EDUARDO MIRANDA
TOWARDS NATURAL AND ARTIFICIAL INTELLIGENCE COUPLING WITH MUSIC NEUROTECHNOLOGY
FINKENAU, FORUM | 11.00–12.00

[Abstract not available at the time of printing.]

FRIDAY, SEPTEMBER 2
JOHN CHOWNING
THE EARLY YEARS OF COMPUTER MUSIC AND LIGETI'S DREAM – STANFORD, PARIS, CCRMA, HAMBURG
KAMPNAGEL, K2 | 19.00–20.00

Limited by the cost of memory, computer systems 50 years ago provided composers only one of the sound generating and processing methods that are available today: synthesis. With the well-conceived and accessible synthesis programs by the future-thinking Max Mathews, we learned much about the perception of sound as we wrapped our aural skills around the technology and discovered how to create music from fundamental units, to compose music from the inside out. In 1964 I began learning to program a computer.

Inspired by the electroacoustic music in Europe, especially from Cologne, my first interest was in spatialization, which led to the discovery of FM synthesis in 1967. Turenas was the culmination of those years of learning and developing when I first presented it to a very small audience, which included Ligeti, at Stanford University in April 1972.

Ligeti, who had arrived at Stanford a few months before and with whom I had spent many hours discussing music and computers, fully understood that computers were the future of electroacoustic music, and because of his musical stature he became an exceptionally convincing advocate. Ligeti, with Jean-Claude Risset, nudged Boulez's thinking in 1972 toward the digital domain, which until then was barely on the horizon of IRCAM's planning. Then, with the participation of Computer Science at the Universität Hamburg and CCRMA at Stanford, he nearly succeeded in establishing a computer music center at the Hochschule für Musik und Theater in Hamburg.


OVERVIEW | SESSIONS


PAPER SESSIONS

HAMBURG SHOWCASE
WEDNESDAY, AUGUST 31, 15.00–17.30, FINKENAU FORUM

15.00 | Wolfgang Fohl & Robert Mores, University of Applied Sciences (HAW): Research in Sound Analysis and Design

15.30 | Clemens Wöllner & Tim Ziemer, University of Hamburg (UHH), Institute of Systematic Musicology: Recent Research Projects Carried Out at the Institute of Systematic Musicology

16.00 | Georg Hajdu & Panos Kolias, Hamburg University of Music and Drama (HfMT): Research, Development and Practice of Music Software Design

16.30 | Sarah-Indriyati Hardjowirogo, Leuphana University Lüneburg: Aesthetic Strategies: Research and Teaching in the Field of Music and Auditory Culture from a Cultural Sciences Perspective

17.00 | Hans-Joachim Braun, Helmut Schmidt University Hamburg: Musical Aesthetics, Creativity, Aircraft Noise Control

PAPER SESSION I: ROOMS, SPACES, ENVIRONMENTS
THURSDAY, SEPTEMBER 1, 9.20–10.40, FINKENAU FORUM

Chikashi Miyama, Götz Dipper, Robert Krämer & Jan Schacher: Zirkonium, SpatDIF and mediaartbase.de: An archiving strategy for spatial music at ZKM

Florian Meyer, Malte Nogalski & Wolfgang Fohl: Detection Thresholds in Audio-Visual Redirected Walking

Jan C. Schacher, Nils Peters & Trond Lossius: Authoring Spatial Music with SpatDIF Version 0.4

Martin Ljungdahl Eriksson, Ricardo Atienza & Lena Pareto: Sound Bubbles: the aesthetic additive design approach to actively enhance acoustic office environments

PAPER SESSION II: MUSICAL INSTRUMENTS & SONIC INTERACTION DESIGN
THURSDAY, SEPTEMBER 1, 14.40–16.00, FINKENAU FORUM

Stefano Baldan, Stefano Delle Monache, Davide Rocchesso & Hélène Lachambre: Sketching sonic interactions by imitation-driven sound synthesis

Luca Turchet, Andrew McPherson & Carlo Fischione: Smart Instruments: Towards an Ecosystem of Interoperable Devices Connecting Performers and Audiences

Yu-Chung Tseng, Bo-Ting Li & Tsung-Hua Wang: Prototyping wireless integrated wearable interactive music system

Ragnhild Torvanger Solberg & Alexander Refsum Jensenius: Optical or Inertial? Evaluation of Two Motion Capture Systems for Studies of Clubbing to Electronic Dance Music

PAPER SESSION III: PERSPECTIVES ON PERCEPTION & PERFORMANCE
THURSDAY, SEPTEMBER 1, 16.20–17.40, FINKENAU FORUM

Maria Abela Scicluna, Adrian Muscat & Victor Buttigieg: A Study on the Use of Perceptual Features for Music Emotion Recognition

Jean-Michaël Celerier, Myriam Desainte-Catherine & Jean-Michel Couturier: Rethinking the audio workstation: tree-based sequencing with i-score and the LibAudioStream

Arvid Ong & Reinhard Kopiez: The perceptual similarity of tone clusters: an experimental approach to the listening of avant-garde music

Marcus Maeder & Roman Zweifel: Trees: An artistic-scientific observation system

PAPER SESSION IV: PERFORMING SOUND: GESTURES & DATA
FRIDAY, SEPTEMBER 2, 9.20–10.40, FINKENAU FORUM

Jesper Hohagen & Clemens Wöllner: Movement Sonification of Musical Gestures: Investigating Perceptual Processes Underlying Musical Performance Movements

Dom Brown, Chris Nash & Tom Mitchell: GestureChords: Transparency in Gesturally Controlled Digital Musical Instruments through Iconicity and Conceptual Metaphor

Francesco Grani, Diego Di Carlo, Jorge Madrid Portillo, Matteo Girardi, Razvan Paisa, Jian Stian Banas, Iakovos Vogiatzoglou, Dan Overholt & Stefania Serafin: Gestural control of wavefield synthesis

Jan C. Schacher, Daniel Bisig & Patrick Neff: Exploring Gesturality in Music Performance

PAPER SESSION V: HISTORY AND AESTHETICS OF DIGITAL AUDIO
FRIDAY, SEPTEMBER 2, 14.40–16.00, FINKENAU FORUM

Visda Goudarzi & Artemi-Maria Gioti: Engagement and interaction in participatory sound art

Florian Grote: Interfaces for Sound: Representing Material in Pop Music Productions

Anders Bach Pedersen: A Liberated Sonic Sublime: Perspectives on the Aesthetics & Phenomenology of Sound Synthesis

Adrian Freed: David Wessel's Slabs: a case study in Preventative Digital Musical Instrument Conservation

PAPER SESSION VI: ALGORITHMIC GENERATION OF SOUND & MUSIC
FRIDAY, SEPTEMBER 2, 16.20–17.40, FINKENAU FORUM

Dimos Makris, Ioannis Karydis & Emilios Cambouropoulos: VISA3: refining the voice integration/segregation algorithm

Michael Olsen, Julius Smith & Jonathan Abel: A Hybrid Filter-Wavetable Oscillator Technique for FOF Synthesis

Gilberto Bernardes, Luis Aly & Matthew Davies: SEED: Resynthesizing Environmental Sounds from Examples

Christodoulos Aspromallis & Nicolas E. Gold: Form-aware, real-time adaptive music generation for interactive experience


OVERVIEW | SESSIONS


Jason Freeman, Brian Magerko, Doug Edwards, Morgan Miller, Roxanne Moore & Anna Xambó: Using EarSketch to Broaden Participation in Computing and Music

Arne Eigenfeldt: Exploring Moment-form in Generative Music

Sever Tipei: Emerging Composition: Being and Becoming

POSTER SESSIONS

POSTER SESSION I
THURSDAY, SEPTEMBER 1, 12.00–14.40, FINKENAU LIBRARY

Tomoyasu Nakano & Masataka Goto: LyricListPlayer: A Consecutive-Query-by-Playback Interface for Retrieving Similar Word Sequences from Different Song Lyrics

Andreas Henrici & Martin Neukom: Synchronization in chains of van der Pol oscillators

Alessandro Anatrini: The state of the art on the educational software tools for electroacoustic composition

Nuria Bonet, Alexis Kirke & Eduardo Miranda: Sonification of Dark Matter: Challenges and Opportunities

Matthias Flückiger, Tobias Grosshauser & Gerhard Tröster: Precision Finger Pressing Force Sensing in the Pianist-Piano Interaction

Rui Penha & Gilberto Bernardes: Beatings: a web application to foster the renaissance of the art of musical temperaments

Francis Stevens, Damian Murphy & Stephen Smith: Emotion and soundscape preference rating using semantic differential pairs and the self-assessment manikin

Romain Michon, Chris Chafe, Nick Gang, Mishel Johns, Sile O'Modhrain, Matthew Wright, David Sirkin, Wendy Ju & Nikhil Gowda: A Faust Based Driving Simulator Sound Synthesis Engine

Benedict Carey & Burak Ulas: VR 'space opera': mimetic spectralism in an immersive starlight audification system

Karim M. Ibrahim & Mahmoud Allam: Primary-Ambient Extraction Using Adaptive Weighting and Principal Component Analysis

Sertan Şentürk, Gopala Krishna Koduri & Xavier Serra: A Score-Informed Computational Description of Svaras Using a Statistical Model

Juan Jose Burred: Factorsynth: a Max tool for sound analysis and resynthesis based on matrix factorization

Christof Martin Schultz & Marten Seedorf: The loop ensemble: open source instruments for teaching electronic music in the classroom

POSTER SESSION II
FRIDAY, SEPTEMBER 2, 12.00–14.40, FINKENAU LIBRARY

Romain Michon, Julius Orion Smith III, Chris Chafe, Ge Wang & Matthew James Wright: Nuance: Adding Multi-Touch Force Detection to the iPad

PAPER SESSION VII: DIGITAL LEARNING ENVIRONMENTS: SINGING, ANALYSING & CODING
SATURDAY, SEPTEMBER 3, 9.20–10.40, FINKENAU FORUM

Masaru Arai, Mitsuyo Hashida & Haruhiro Katayose: Revealing the secret of "groove" singing: analysis of J-pop music

Rong Gong, Yile Yang & Xavier Serra: Pitch Contour Segmentation for Computer-aided Jingju Singing Training

Fotios Moschos, Anastasia Georgaki & Georgios Kouroupetroglou: FONASKEIN: An interactive application software for the practice of the singing voice

Todd Harrop: Modulating or 'Transferring' Between Non-octave Scales

PAPER SESSION VIII: COMPUTATIONAL MUSICOLOGY & MATHEMATICAL MUSIC THEORY
SATURDAY, SEPTEMBER 3, 11.20–12.40, FINKENAU FORUM

Masatoshi Hamanaka, Keiji Hirata & Satoshi Tojo: deepGTTM-II: Automatic Generation of Metrical Structure based on Deep Learning Technique

Víctor Padilla & Darrell Conklin: Statistical generation of two-voice florid counterpoint

Guangyu Xia & Roger Dannenberg: Expressive humanoid robot for automatic accompaniment

Romain Michon & Ge Wang: FaucK!! Hybridizing the FAUST and ChucK Audio Programming Languages

SPECIAL SESSIONS

SPECIAL SESSION I ‘GIOVANNI DE POLI’
THURSDAY, SEPTEMBER 1, 16.20–17.40
FINKENAU DITZE-SAAL

Federico Avanzini, Sergio Canazza, Giovanni De Poli, Carlo Fantozzi, Edoardo Micheloni, Niccolò Pretto, Antonio Rodà, Silvia Gasparotto & Giuseppe Salemi
Virtual reconstruction of an ancient Greek pan flute

Jimmie Paloranta, Anders Lundström, Ludvig Elblaus, Roberto Bresin & Emma Frid
Interaction with a large sized augmented string instrument intended for a public setting

Federico Fontana, Ivan Camponogara, Matteo Vallicella, Marco Ruzzenente & Paola Cesari
An exploration on whole-body and foot-based vibrotactile sensitivity to melodic consonance

Michele Geronazzo, Jacopo Fantin, Giacomo Sorato, Guido Baldovino & Federico Avanzini
The SelfEar project: a mobile application for low-cost pinna-related transfer function acquisition

SPECIAL SESSION II
COMPOSITION & NOTATION
FRIDAY, SEPTEMBER 2, 16.20–17.40
FINKENAU DITZE-SAAL

Ken Déguernel, Emmanuel Vincent & Gérard Assayag
Using Multidimensional Sequences For Improvisation In The OMax Paradigm


OVERVIEW | SESSIONS

Sandor Mehes, Maarten van Walstijn & Paul Stapleton
Towards a Virtual-Acoustic String Instrument

Juan J. Bosch & Emilia Gomez
Melody extraction based on a source-filter model using pitch contour selection

Grigore Burloiu
An Online Tempo Tracker for Automatic Accompaniment based on Audio-to-audio Alignment and Beat Tracking

Stefano Fasciani
TSAM: a tool for analyzing, modeling, and mapping the timbre of sound synthesizers

Konstantinos Trochidis, Carlos Guedes, Akshay Anantapadmanabhan & Andrija Klaric
CAMeL: Carnatic Percussion Music Generation Using N-Gram Models

Jeremy Ham & Daniel Prohasky
Developing a parametric spatial design framework for digital drumming

Luca Turchet
The Hyper Hurdy-Gurdy / The Hyper Zampogna

Philippe Kocher
Polytempo Composer: a tool for the computation of synchronisable tempo progressions

Naithan Bosse
SoundScavenger: An Interactive Soundwalk

Shengchen Li, Dawn Black, Mark Plumbley & Simon Dixon
A model selection test on effective factors of the choice of expressive timing clusters for a phrase

Brent Lee
This is an important message for Julie Wade: Emergent Performance Events in an Interactive Installation

Andreas Almqvist Gref, Ludvig Elblaus & Kjetil Falkenberg Hansen
Sonification as catalyst in training manual wheelchair operation for sports and everyday life

Alyssa Aska
Improvisation and gesture as form determinants in works with electronics

Sertan Şentürk & Xavier Serra
Composition Identification in Ottoman-Turkish Makam Music Using Transposition-Invariant Partial Audio-Score Alignment

Benjamin Stahl & Iohannes Zmölnig
Musical sonification in electronic therapy aids for motor-functional treatment – a smartphone approach

Sivaramakrishnan Meenakshisundaram, Eduardo Miranda & Irene Kaimi
Factors Influencing Vocal Pitch in Articulatory Speech Synthesis: A Study Using PRAAT

Olga Slizovskaia, Emilia Gomez & Gloria Haro
Automatic musical instrument recognition in audiovisual recordings by combining image and audio classification strategies

Jan-Torsten Milde
Teaching Audio Programming with the Neonlicht-Engine

Hiroki Nishino & Adrian Cheok
Lazy Evaluation in Microsound Synthesis

POSTER SESSION III
SATURDAY, SEPTEMBER 3, 12.00–14.40
FINKENAU LIBRARY

Hiroki Nishino & Adrian D. Cheok
Speculative Digital Sound Synthesis

Roberto Bresin, Ludvig Elblaus, Emma Frid, Federico Favero, Lars Annersten, David Berner & Fabio Morreale
Sound forest/ljudskogen: a large-scale string-based interactive musical instrument

Elliot Kermit-Canfield
A Virtual Acousmonium for Transparent Speaker Systems

Eita Nakamura, Kazuyoshi Yoshii & Shigeki Sagayama
Rhythm transcription of polyphonic MIDI performances based on a merged-output HMM for multiple voices

Josh Mycroft, Josh Reiss & Tony Stockman
Visually Representing and Interpreting Multivariate Data for Audio Mixing

Michael Mcloughlin, Luca Lamoni, Ellen Garland, Simon Ingram, Alexis Kirke, Michael Noad, Luke Rendell & Eduardo Miranda
Adapting a Computational Multi Agent Model for Humpback Whale Song Research for use as a Tool for Algorithmic Composition


ABSTRACTS | SESSIONS

Research, Development and Practice of Music Software Design
Georg Hajdu & Panos Kolias

The Hamburg University of Music and Theater (HfMT) has a long tradition in electronic and computer music going back to the mid-1980s. Since the establishment of its program in multimedia composition, the school has become a playground for experimental composers pursuing artistic research projects as well as hardware and software development in areas such as non-standard music notation, new instrument design, networked multimedia performance, sound spatialization and interactive music theater.

The growing interest in digital media amongst students in the performing arts programs has not only fostered the teaching of commercial software packages such as Logic or Sibelius, but also increased the need to participate actively in their improvement. The software Melodyne represents such a case, allowing both artistic sound manipulation and surgical correction:

Celemony’s Melodyne analyzes audio files and allows the user to manipulate single notes (even in harmonic context) in order to create new sounds completely different from the original or, at the other extreme, to correct technical issues without affecting the quality or the performance of the original recording.

Research and Teaching in the Field of Music and Auditory Culture from a Cultural Sciences Perspective
Sarah-Indriyati Hardjowirogo

((audio)) Aesthetic Strategies is an interdisciplinary division within the Institute of Culture and Aesthetics of Digital Media (ICAM) at Leuphana University Lüneburg. ((audio)) is involved in research and teaching (BA and MA Cultural Sciences) in the field of music and auditory culture from a cultural sciences perspective. Research at ((audio)) focuses on questions concerning music and digital media in the sub-fields of (1) technoculture, (2) media integration, interfaces, surfaces, (3) sampling and program control, and (4) data networks as cultural spaces, thereby contributing to a better understanding of the aesthetics, methods and techniques of digital music production and engaging in theories of media-cultural change. In teaching, ((audio)) puts particular importance on the combination of theoretical education and musical practice. With the Workroom Digital Audio and the audioLab as a high-end production facility, students as well as composers and researchers are provided access to the technological resources required for professional digital audio production. Both research and teaching at ((audio)) reflect the belief that academic education should not only be based on a fixed canon of skills and knowledge but should rather be understood as an organized communication process reflecting both local competence and innovative ability.

The presentation introduces the division’s current activities in research and teaching and is complemented by a practical demonstration of recent student works.

HAMBURG SHOWCASE
Research in Sound Analysis and Design
Wolfgang Fohl & Robert Mores

At the University of Applied Sciences Hamburg, the faculties of ‘Design, Media and Information’ and ‘Engineering and Computer Science’ pursue the following areas of research in sound analysis and design:

Human-computer interaction in virtual acoustic environments – redirected walking, gesture control of virtual sound sources, WFS rendering for interactive environments.

Spatial audio rendering – virtual room acoustics based on measured or computed room impulse responses, WFS enhancements for elevation rendering.

Multimedia – prototypes for 3D-audio-video production workflow.

Sound analysis – instrument identification from guitar sounds ([email protected])

Immersive audio – 3D sound installations, 3D audio formats and perception, film sound, audio design and production ([email protected])

Stringed sound – violin auralization from physical models, high-level violin auralization from acoustical fingerprints taken from Italian masterpieces, pre-manufacturing acoustic design tools for luthiers, timbre perception and representation ([email protected])

Interactive musical sequences – between notation and compositional thought lies as wide a gap as between digital music production tools and computer music languages. The Interactive Musical Sequencer combines hierarchic structuring with parametric modification and is capable of generating and reproducing musical content with a familiar and simple user experience and interface. ([email protected])

Recent Research Projects Carried Out at the Institute of Systematic Musicology
Clemens Wöllner & Tim Ziemer

The Institute of Systematic Musicology at the University of Hamburg is among the largest research centres of its kind in Germany and hosts about 150 students enrolled in BA, MA and PhD programmes. Faculty members have specialized in musical acoustics and music psychology, and taught courses also include popular music studies, empirical aesthetics, audiovisual media and music business as well as sociological and ethnographic approaches to music.

This presentation will briefly focus on recent research projects carried out at the institute. In music psychology, these topics include research on synchronisation, motion capture of musical gestures, human movement sonification, prototypical perception, audiovisual quality judgments, and the study of musical joint actions. In musical acoustics, research covers fields such as sound field synthesis, physical modeling, radiation characteristics of musical instruments, and spatial sound localization.


Sound Bubbles: the aesthetic additive design approach to actively enhance acoustic office environments
Martin Ljungdahl Eriksson, Ricardo Atienza & Lena Pareto

Moving towards more open and collaborative workplaces has been an emerging trend in recent decades. This change has led to workers sharing a common open space, with seating based on current activity. This makes it difficult to design sonic environments that cater to different needs in the same space. In this study we explored the possibility of adding adaptive sound environments to enhance the experience of an activity-based office workplace. For this purpose, we developed the concept of the “sound bubble,” a micro-space in which the user is embedded in a semi-transparent added sound environment. This allows the user to stay in “everyday listening” mode, i.e., not focusing on anything particular in the surrounding environment, while keeping the link with it.

A total of 43 test subjects participated in an experience-based test, conducting their usual work tasks in an office landscape. Our results show that the sound bubble can enhance auditory work conditions for individual work requiring concentration.

Zirkonium, SpatDIF and mediaartbase.de: an Archiving Strategy for Spatial Music at ZKM
Chikashi Miyama, Götz Dipper, Robert Krämer & Jan C. Schacher

ZKM | Institute for Music and Acoustics has been contributing to the production and realization of spatial music for more than 20 years. This paper describes how the institute archives spatial compositions, maximizing their universality, reusability, and accessibility for future performances and research by combining three key elements: Zirkonium, SpatDIF, and mediaartbase.de.

PAPER SESSION II

Prototyping Wireless Integrated Wearable Interactive Music System
Yu-Chung Tseng, Bo-Ting Li & Tsung-Hua Wang

This paper presents the development of a wireless integrated wearable interactive music system, Musfit. The system was built with the intention of integrating the motions of a performer’s hands (fingers), head, and feet into a music performance. The system consists of a pair of gloves, a pair of shoes, and a cap, in which various sensors are embedded to detect the performer’s body motion. The sensor data are transmitted wirelessly to a computer and then mapped onto various parameters of sound effectors built in Max/MSP for interactive music performance.

The ultimate goal of the system is to free the performer’s performing space, to increase the technological transparency of performing and, as a result, to promote interest in interactive music performance.

At the present stage, the prototype of the wireless integrated wearable interactive music system has reached our expected goals. Further studies are needed to assess and improve the playability and stability of the system so that it can eventually be employed effectively in concerts.

Musical Aesthetics, Creativity, Aircraft Noise Control
Hans-Joachim Braun

In research on different aspects of sound and music, the Helmut Schmidt University offers a mixed fare. For some time, psychologist Thomas Jacobsen and his group have been researching experimental music aesthetics, for example the neural dissociation between musical emotions and liking in experts and laypersons.

A book on creativity in technology and music, edited by Hans-Joachim Braun, has just come out, assessing, inter alia, what cognitive science has to say on creative processes in invention and engineering design and on musical composition and improvisation. What do they have in common?

Andreas Möllenkamp explores the history of music software development and its implications for musical practice and artistic strategies, while, in the field of acoustics, Udo Zölzer and his group work on audio coding with short and, hopefully, no delay; on “upmix” from stereo to multi-channel; and, regarding guitar effects, on digital simulation of analog electronic circuits. With an A400M turboprop transport aircraft on campus, Delf Sachau and his team have successfully applied an active noise reduction system.

PAPER SESSION I

Authoring Spatial Music with SpatDIF Version 0.4
Jan C. Schacher, Nils Peters, Trond Lossius & Chikashi Miyama

SpatDIF, the Spatial Sound Description Interchange Format, is a lightweight, human-readable syntax for storing and transmitting spatial sound scenes, serving as an independent, cross-platform and host-independent solution for spatial sound composition. The recent update to version 0.4 of the specification introduces the ability to define and store continuous trajectories on the authoring layer in a human-readable way, as well as to describe groups and source spreading. As a result, SpatDIF provides a new way to exchange higher-level authoring data across authoring tools, helping to preserve artistic intent in spatial music.
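To give a flavour of what such a human-readable scene description looks like, the sketch below serialises a short source trajectory using SpatDIF-style OSC addresses. The `/spatdif/source/<id>/position` pattern follows the general shape of the specification's address space; the source name and coordinates here are invented for the example.

```python
# Illustrative sketch only: rendering a tiny spatial scene in a
# SpatDIF-style OSC address syntax. The scene content is invented;
# consult the SpatDIF specification for the authoritative syntax.

def position_message(source_id, x, y, z, time=None):
    """Render one source position as a human-readable SpatDIF-style line,
    optionally prefixed with a timestamp in seconds."""
    addr = f"/spatdif/source/{source_id}/position"
    prefix = f"{time:.2f} " if time is not None else ""
    return f"{prefix}{addr} {x:.2f} {y:.2f} {z:.2f}"

def render_trajectory(source_id, points):
    """points: iterable of (time, x, y, z) tuples -> list of timed messages."""
    return [position_message(source_id, x, y, z, time=t) for t, x, y, z in points]

if __name__ == "__main__":
    for line in render_trajectory("violin", [(0.0, 0.0, 1.0, 0.0),
                                             (1.0, 0.5, 1.0, 0.0)]):
        print(line)
```

Because the format is plain text, a trajectory authored in one tool can be inspected, versioned and replayed in another, which is precisely the interchange goal the abstract describes.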

Detection Thresholds in Audio-Visual Redirected Walking
Florian Meyer, Malte Nogalski & Wolfgang Fohl

Redirected walking is a technique for presenting users with a walkable virtual environment that is larger than the physical space of the reproduction room. For the proper application of this technique, it is necessary to determine the detection thresholds for the applied manipulations. This paper describes an experiment to measure the detection levels of redirected walking manipulations in an audio-visual virtual environment; results are presented and compared to previous results from a purely acoustically controlled redirected walking experiment.
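The abstract does not specify the psychophysical procedure used; a common way to estimate such detection thresholds is an adaptive staircase, sketched below with a hypothetical deterministic observer. The gain levels and the observer's true threshold are invented for the illustration.

```python
def staircase(respond, start, step, reversals_wanted=6):
    """Simple 1-up/1-down adaptive staircase.
    respond(level) -> True if the manipulation was detected at that level.
    The level is lowered after a detection and raised after a miss;
    the threshold estimate is the mean of the reversal levels."""
    level, last, reversal_levels = start, None, []
    while len(reversal_levels) < reversals_wanted:
        detected = respond(level)
        if last is not None and detected != last:
            reversal_levels.append(level)   # response flipped: a reversal
        last = detected
        level += -step if detected else step
    return sum(reversal_levels) / len(reversal_levels)

# Hypothetical observer: detects any manipulation stronger than 6 units.
estimate = staircase(lambda level: level > 6, start=20, step=1)
```

With this deterministic observer the procedure oscillates between levels 6 and 7, so the estimate converges to the boundary between them; with a real participant the same machinery tracks the perceptual threshold statistically.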




PAPER SESSION III

Trees: An Artistic-Scientific Observation System
Marcus Maeder & Roman Zweifel

In our research project «trees: Rendering eco-physiological processes audible», we connected acoustic emissions of plants with ecophysiological processes and artistically rendered audible natural phenomena that are not normally perceptible. The acoustic emissions of a tree in the Swiss Alps were recorded with special sensors, and all other, non-auditory ecophysiological measurement data (e.g. the trunk and branch diameters that change depending on water content, the sap flow rate in the branches, the water present in the soil, air moisture, solar radiation, etc.) were sonified, i.e. translated into sounds. The recordings and sonified measurements were implemented in a number of different media art installations, which at the same time served as a research environment in which to examine and experiment artistically with the temporal and spatial connections between plant sounds, physiological processes and environmental conditions in an artistic-scientific observation system.
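At its core, the kind of sonification described here is a parameter mapping: each sensor range is scaled onto a synthesis parameter range. The sketch below shows a minimal linear mapping; the calibration values (sap-flow range, frequency range) are invented for the illustration and are not taken from the project.

```python
def linmap(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into a synthesis parameter range,
    clamping readings that fall outside the calibrated input range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical calibration: sap flow of 0-50 units onto 100-1000 Hz.
freq = linmap(25.0, 0.0, 50.0, 100.0, 1000.0)  # a mid-range reading
```

In practice each data stream (diameter, sap flow, soil moisture, ...) would get its own mapping of this shape, feeding whatever synthesis engine drives the installation.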

The perceptual similarity of tone clusters: an experimental approach to the listening of avant-garde music
Arvid Ong & Reinhard Kopiez

This study examines the musical tone cluster as a prototypical sound of avant-garde music in the 20th and 21st centuries. Tone clusters marked a turning point from the pitch-related compositional techniques of earlier epochs to the sound-based materials used in avant-garde music. Henry Cowell offered the first theoretical reflection on the structure of clusters, focusing on tone density, which depends on the number of tones and the ambitus of the cluster. In this paper we present first results of a sound discrimination experiment in which participants rated the similarity of prototypical cluster sounds varying in density. The results show congruency between theoretical features of the cluster structure, the results of the timbre feature analysis, and the perceptual evaluation of the stimuli. The correlation between tone cluster density and psychoacoustical roughness was r = .95, and between roughness and similarity ratings r = .74. Overall, the similarity ratings of cluster sounds can be grouped into two classes: (a) clusters with a high grade of perceptual discrimination depending on cluster density, and (b) clusters of a more aurally saturated structure, which are difficult to separate and evaluate. Additionally, the relation between similarity ratings and psychoacoustic features was examined. Our findings can provide valuable insights for aural training methods for avant-garde music.
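The reported coefficients (r = .95, r = .74) are ordinary Pearson correlations. As a reminder of how such a value is computed, the sketch below implements the formula on toy numbers; the density and roughness values are invented for the illustration and are not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data only: perfectly linear, so r comes out as exactly 1.0.
density = [2, 4, 6, 8]
roughness = [1.0, 2.0, 3.0, 4.0]
r = pearson_r(density, roughness)
```

With real ratings the same computation yields values like the .95 and .74 reported above, quantifying how tightly roughness tracks cluster density.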

Smart Instruments: Towards an Ecosystem of Interoperable Devices Connecting Performers and Audiences
Luca Turchet, Andrew McPherson & Carlo Fischione

This paper proposes a new class of augmented musical instruments, “Smart Instruments”, which are characterized by embedded computational intelligence, bidirectional wireless connectivity, an embedded sound delivery system, and an onboard system for feedback to the player. Smart Instruments bring together separate strands of augmented instrument, networked music and Internet of Things technology, offering direct point-to-point communication between each other and other portable sensor-enabled devices, without need for a central mediator such as a laptop. This technological infrastructure enables an ecosystem of interoperable devices connecting performers as well as performers and audiences, which can support new performer-performer and audience-performer interactions. As an example of the Smart Instruments concept, this paper presents the Sensus Smart Guitar, a guitar augmented with sensors, onboard processing and wireless communication.

Sketching Sonic Interactions by Imitation-Driven Sound Synthesis
Stefano Baldan, Stefano Delle Monache, Davide Rocchesso & Hélène Lachambre

Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred tools for producing sketches, for their simplicity and immediacy. Analogue tools for sonic sketching do not exist yet, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of human voice and gesture, making it possible to produce more refined sonic sketches and to think about sound in a more designerly way. This paper presents two main outcomes of the project: the Sound Design Toolkit, a palette of basic sound synthesis models grounded in ecological perception and the physical description of sound-producing phenomena, and SkAT-Studio, a visual framework based on sound design workflows organized in stages of input, analysis, mapping, synthesis, and output. The integration of these two software packages provides an environment in which sound designers can go from concepts, through exploration and mock-ups, to prototypes in sonic interaction design, taking advantage of the possibilities offered by vocal and gestural imitations at every step of the process.

Optical or Inertial? Evaluation of Two Motion Capture Systems for Studies of Clubbing to Electronic Dance Music
Ragnhild Torvanger Solberg & Alexander Refsum Jensenius

What type of motion capture system is best suited for studying dancing to electronic dance music? The paper discusses positive and negative sides of using camera-based and sensor-based motion tracking systems for group studies of dancers. This is exemplified through experiments with a Qualisys infrared motion capture system being used alongside a set of small inertial trackers from Axivity and regular video recordings. The conclusion is that it is possible to fine-tune an infrared tracking system to work satisfactorily for group studies of complex body motion in a “club-like” environment. For ecological studies in a real club setting, however, inertial tracking is the most scalable and flexible solution.



GestureChords: Transparency in Gesturally Controlled Digital Musical Instruments through Iconicity and Conceptual Metaphor
Dom Brown, Chris Nash & Tom Mitchell

This paper presents GestureChords, a mapping strategy for chord selection in freehand gestural instruments. The strategy maps chord variations to a series of hand postures using the concepts of iconicity and conceptual metaphor, influenced by their use in American Sign Language (ASL), to encode meaning in gestural signs. The mapping uses the conceptual metaphors MUSICAL NOTES ARE POINTS IN SPACE and INTERVALS BETWEEN NOTES ARE SPACES BETWEEN POINTS, which are mapped respectively to the number of extended fingers in a performer’s hand and the abduction or adduction between them. The strategy is incorporated into a digital musical instrument and tested in a preliminary study for transparency by both performers and spectators, which gave promising results for the technique.
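The core idea can be made concrete with a small mapping table. The sketch below is a hypothetical illustration only: the number of extended fingers selects how many chord tones sound, and finger abduction selects closer or wider interval stacks. The concrete mapping in the paper may differ.

```python
# Hypothetical illustration of the GestureChords idea, not the paper's
# actual mapping: extended fingers -> number of chord tones,
# abduction (spread fingers) -> wider intervals between those tones.

CLOSE_STACK = [0, 3, 7, 10, 14]   # semitone offsets: stacked thirds
WIDE_STACK = [0, 7, 14, 21, 28]   # semitone offsets: stacked fifths

def chord_for_posture(root_midi, extended_fingers, abducted):
    """Return MIDI note numbers for one hand posture."""
    stack = WIDE_STACK if abducted else CLOSE_STACK
    return [root_midi + iv for iv in stack[:extended_fingers]]

# Three adducted fingers on root C4 (MIDI 60) -> a close triad.
triad = chord_for_posture(60, 3, abducted=False)   # [60, 63, 67]
```

The point of such an explicit table is exactly the transparency the study tests for: a spectator who knows the metaphor can read the chord off the hand shape.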

Movement Sonification of Musical Gestures: Investigating Perceptual Processes Underlying Musical Performance Movements
Jesper Hohagen & Clemens Wöllner

Truslit (1938) developed a theory on the gestural quality of musical interpretations. Self-other judgment paradigms using visual point-light movements allow elucidating the action-perception coupling processes underlying musical performance movements as described by Truslit. Employing movement sonification with a continuous parameter-mapping approach may further reveal parallels between the audio information of music, physical movements, and audio information based on sonified movement parameters. The present study investigates Truslit’s hypothesis of prototypical musical gestures by comparing free movements and movements following detailed instructions, recorded with a 12-camera optical motion capture system. The effects of watching these movements and listening to their sonification were tested within a multimodal self-other recognition task. A total of 26 right-handed participants were tracked with a motion capture system while executing arm movements along with Truslit’s (1938) original musical examples. The second experimental part consisted of a multimodal self-other perception judgment paradigm, presenting sequences to the same participants (matched with those of two other participants, unbeknown to them) under four different conditions. Signal detection analyses of the self-other recognition task addressed judgment sensitivity calculated for individual participants. While self-recognition was successful for visual, audiovisual and still-image examples, movement sonification did not provide sufficient detail on the performers’ agency. Nevertheless, a number of relevant sonification parameters are discussed.

A Study on the Use of Perceptual Features for Music Emotion Recognition
Maria Abela Scicluna, Adrian Muscat & Victor Buttigieg

Perceptual features are defined as musical descriptors that closely match a listener’s understanding of musical characteristics. This paper tackles music emotion recognition through the consideration of three kinds of perceptual feature sets: human-rated, computational and modelled features. The human-rated features are extracted through a survey, and the computational features are estimated directly from the audio signal. Regressive modelling is used to predict the human-rated features from the computational features; the predicted set constitutes the modelled features. The regressive models performed well for all features except Harmony, Timbre and Melody. All three feature sets are used to train three regression models (one for each set) to predict the components Energy and Valence, which are then used to recognise emotion. The model trained on the rated features performed well for both components, showing that emotion can be predicted from perceptual features. The models trained on the computational and modelled features performed well in predicting Energy, but less well in predicting Valence. This is not surprising, since the main predictors for Valence are Melody, Harmony and Timbre, which therefore need added or modified computational features that better match human perception.

Rethinking the audio workstation: tree-based sequencing with i-score and the LibAudioStream
Jean-Michaël Celerier, Myriam Desainte-Catherine & Jean-Michel Couturier

The field of digital music authoring provides a wealth of creative environments in which music can be created and authored: patchers, programming languages, and multitrack sequencers. By combining the i-score interactive sequencer with the LibAudioStream audio engine, a new piece of music software able to represent and play rich interactive audio sequences is introduced. We present new stream expressions compatible with the LibAudioStream and use them to create an interactive audio graph: hierarchical streams and send–return streams. This makes it possible to create branching and arbitrarily nested musical scores in an OSC-centric environment. Three examples of interactive musical scores are presented: the recreation of a traditional multitrack sequencer, an interactive musical score, and a temporal effect graph.
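A minimal sketch of what "arbitrarily nested" means here: a score is a tree whose leaves are clips and whose inner nodes play their children in sequence or in parallel. The class names and the durations-only model below are invented for the illustration; the actual stream expressions live in the LibAudioStream.

```python
# Illustrative tree-based score model (durations only, names invented):
# Clip  -> a leaf with a fixed duration
# Seq   -> plays children one after another (durations add up)
# Par   -> plays children together (duration = longest child)

class Clip:
    def __init__(self, duration):
        self.duration = duration

class Seq:
    def __init__(self, *children):
        self.children = children
    @property
    def duration(self):
        return sum(c.duration for c in self.children)

class Par:
    def __init__(self, *children):
        self.children = children
    @property
    def duration(self):
        return max(c.duration for c in self.children)

# A 4 s intro followed by two layered loops of 8 s and 6 s.
score = Seq(Clip(4.0), Par(Clip(8.0), Clip(6.0)))
```

Because `Seq` and `Par` nodes nest freely, the same structure expresses both a flat multitrack session (one `Par` of long `Seq` tracks) and deeply hierarchical interactive scores.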

PAPER SESSION IV

Exploring Gestuality in Music Performance
Jan C. Schacher, Daniel Bisig & Patrick Neff

Perception of gesturality in music performance is a multi-modal phenomenon and is carried by the differentiation of salient features in movement as well as sound. In a mix of quantitative and qualitative methods, we collect sound and motion data, Laban effort qualifiers, and, in a survey with selected participants, subjective ratings and categorisations. The analysis aims at uncovering correspondences in the multi-modal information, using comparative processes to find similarities/differences in movement and sound as well as categorical data. The resulting insights aim primarily at developing tools for automated gestural analysis that can be used both for musical research and to control interactive systems in live performance.


Engagement and Interaction in Participatory Sound Art
Visda Goudarzi & Artemi-Maria Gioti

This paper explores a variety of existing interactive and participatory sound systems and the roles of different actors in them. In human-computer interaction (HCI), the focus in studying interactive systems has been the usability and functionality of the systems. We try to shift the focus towards the creative aspects of interaction in both technology development and sound creation. In some sound art works, control lies in the hands of the technology creators, in others in the hands of composers, and sometimes in the hands of the performers or the audience members. Challenges in such systems include the ownership of technical and aesthetic components, balancing engagement and interaction among different stakeholders (designer, composer, spectator, etc.) and encouraging audience engagement. We propose a discussion on participation, human-computer and human-human interaction within the process of creating and interacting with the system.

A Liberated Sonic Sublime: Perspectives on the Aesthetics & Phenomenology of Sound Synthesis
Anders Bach Pedersen

In this paper I investigate the aesthetics of electronic sound synthesis and the contemporary sublime in an analysis and discussion of interrelated phenomenological, philosophical and cultural considerations, through chosen sound and music examples. I argue that the aesthetic experience of sonic timbres that seem unearthly to us resembles that of a transcendental sublime: the uncanny experience of the synthesis of both known and unknown sounds and the overall immaterial materiality of electricity. Both experimental music and “switched-on” reinterpretations are addressed through explorations of sound in time, space and technology, and I discuss whether we as listeners are able to differentiate materiality from its superficial cognates when challenged by sonic doppelgängers. Concepts of sonorous perception are taken into account from a phenomenological point of reference, with the purpose of suggesting a liberation of sound synthesis and arguing that the transcendence of its boundaries in the physical world is possible through the aesthetics surrounding an unfathomable technological sublime in the art of synthesizing electricity.

David Wessel’s Slabs: a Case Study in Preventative Digital Music Instrument Conservation
Adrian Freed

David Wessel’s Slabs is being conserved as an important element of CNMAT’s collection of electronic and computer music instruments and controllers. This paper describes the strategies being developed to conserve the instrument and how we are pursuing two goals: maintaining the symbolic value of the instrument as a prize-winning, highly regarded example of the “composed instrument” paradigm, and its “use value” as an example that students and scholars can interact with to develop their own composed instruments. Conservation required a sensitive reconfiguration and re-housing of this unique instrument that preserves key original components while rearranging them and protecting them from wear and damage.

Gestural control of wavefield synthesis
Francesco Grani, Diego Di Carlo, Jorge Madrid Portillo, Matteo Girardi, Razvan Paisa, Jian Stian Banas, Iakovos Vogiatzoglou, Dan Overholt & Stefania Serafin

We present a report covering our preliminary research on the control of spatial sound sources in wavefield synthesis through gesture-based interfaces. After a short general introduction on spatial sound and a few basic concepts of wavefield synthesis, we present a graphical application called spAAce which lets users control real-time movements of sound sources by drawing trajectories on a screen. The first prototype of this application has been developed bound to WFSCollider, an open-source software package based on SuperCollider which lets users control wavefield synthesis. The spAAce application has been implemented using Processing, a programming language for sketches and prototypes within the context of visual arts, and communicates with WFSCollider through the Open Sound Control protocol. This application aims to create a new way of interaction for live performance of spatial composition and live electronics.

In a subsequent section we present an auditory game in which players can walk freely inside a virtual acoustic environment (a room in a commercial ship) while being exposed to the presence of several “enemies”, which the player needs to localize and eliminate by using a Nintendo WiiMote game controller to “throw” sounding objects towards them. The aim of this project was to create a gestural interface for a game based on auditory cues only, and to investigate how convolution reverberation can affect people’s perception of distance in a wavefield synthesis setup.
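The trajectory-to-OSC path described above can be sketched in a few lines of Python. The address pattern `/source/1/xy` and the two-float payload are purely illustrative assumptions, not WFSCollider’s actual namespace; the encoding itself follows the OSC 1.0 message format.

```python
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for x in floats:
        msg += struct.pack(">f", x)  # big-endian float32
    return msg

# A screen-drawn trajectory becomes a stream of position messages,
# one per mouse sample, to be sent over UDP to the WFS renderer:
trajectory = [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0)]
packets = [osc_message("/source/1/xy", x, y) for x, y in trajectory]
```

In practice each packet would be sent with `socket.sendto` to the renderer’s listening port.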

PAPER SESSION V
Interfaces for Sound: Representing Material in Pop Music Production
Florian Grote

Sound is the foundation of music composition in contemporary popular cultures. As the medium of popular music, sound is an important dimension in which artists establish their identities and become recognizable. This presents a radical departure from the focus on written notation and a preset corpus of instrument timbres found in classical Western-European music. To create in the medium of sound, contemporary composers utilize digital production systems with new interfaces, many of which are built upon the waveform representation as their cornerstone. This waveform representation is an interesting bridge between the analog world, from where it borrows its appearance as a seemingly continuous line, and the digital world in which it exists as a visualization of a digital model describing continuous audio material in discrete sample points. This opens up possibilities to augment the waveform representation with interactions for algorithmic transformations of the audio material. The paper investigates the cultural implications of such interfaces and provides an outlook into their possible futures.

Form-Aware, Real-Time Adaptive Music Generation for Interactive Experiences
Christodoulos Aspromallis & Nicolas E. Gold

Many experiences offered to the public through interactive theatre, theme parks, video games, and virtual environments use music to complement the participants’ activity. There is a range of approaches to this, from straightforward playback of ‘stings’, to looped phrases, to on-the-fly note generation. Within the latter, traditional genres and forms are often not represented, with the music instead being typically loose in form and structure. We present work in progress on a new method for real-time music generation that can preserve traditional musical genres whilst being reactive in form to the activities of participants. The results of simulating participant trajectories and the effect this has on the music generation algorithms are presented, showing that the approach can successfully handle variable length forms whilst remaining substantially within the given musical style.

PAPER SESSION VII
FONASKEIN: An Interactive Application Software for the Practice of the Singing Voice
Fotios Moschos, Anastasia Georgaki & Georgios Kouroupetroglou

A number of software applications for the practice of the singing voice have been introduced over the last decades, but all of them are limited to equal-tempered scales. In this work, we present the design and development of FONASKEIN, a novel modular interactive software application for the practice of the singing voice in real time and with visual feedback for both equal- and non-equal-tempered scales. Details of the Graphical User Interface of FONASKEIN are given, along with its architecture. The evaluation of FONASKEIN in a pilot experiment with eight participants and four songs in various musical scales showed its positive effect on singing practice in all cases.
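Visual intonation feedback against an arbitrary, possibly non-equal-tempered scale reduces to computing the deviation, in cents, of the sung pitch from the nearest scale degree. A minimal sketch of that idea (our illustration, not FONASKEIN’s actual code):

```python
import math

def cents(ratio: float) -> float:
    """Interval size in cents for a frequency ratio."""
    return 1200.0 * math.log2(ratio)

def nearest_degree(f0: float, tonic: float, scale_cents: list) -> tuple:
    """Return (degree index, deviation in cents) of f0 relative to a scale
    given as cents above the tonic, folding everything into one octave."""
    c = cents(f0 / tonic) % 1200.0
    devs = [(i, ((c - s + 600.0) % 1200.0) - 600.0)
            for i, s in enumerate(scale_cents)]
    return min(devs, key=lambda d: abs(d[1]))

# A just-intonation major third sung against a 12-TET reference scale
# reads as scale degree 4, about 14 cents flat:
deg, dev = nearest_degree(220.0 * 5 / 4, 220.0, [i * 100.0 for i in range(12)])
```

Replacing the list of cents values changes the reference scale, so the same feedback loop serves equal and non-equal temperaments alike.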

Revealing the Secret of “Groove” Singing: Analysis of J-pop Music
Masaru Arai, Mitsuyo Hashida, Haruhiro Katayose & Tastuya Matoba

In music, “groove” refers to the sense of rhythmic “feel” or swing. Groove, which was originally introduced to describe the taste of a band’s rhythm section, has been expanded to non-rhythmic sections and to several genres and has become a key facet of popular music. Some studies have analyzed groove by investigating the delicate beat nuances of playing the drums. However, the nature of groove that is found in continuous sound has not yet been elucidated. To describe the nature of groove, we conducted an evaluative study using a questionnaire and balance method based on signal processing for vocal melodies sung by a professional popular music vocalist. We found that the control over (voiced) consonants followed by vowels constitutes an expression that is crucial to groove in J-pop vocal melodies. The experimental results suggest that time-prolongation and pitch overshoot added to voiced consonants made listeners perceive the vowels that follow to be more accentuated, eventually enhancing listeners’ perceptions of groove elements in vocal melodies.

PAPER SESSION VI

SEED: Resynthesizing Environmental Sounds from Examples
Gilberto Bernardes, Luis Aly & Matthew Davies

In this paper we present SEED, a generative system capable of infinitely extending recorded environmental sounds while preserving their inherent structure. The system architecture is grounded in concepts from concatenative sound synthesis and includes three top-level modules for segmentation, analysis, and generation. An input audio signal is first temporally segmented into a collection of audio events, which are then reduced to a dictionary of audio classes by means of an agglomerative clustering algorithm. This representation, together with a concatenation cost between audio segment boundaries, is finally used to generate sequences of audio segments of arbitrarily long duration. The system output can be varied in the generation process through simple yet effective parametric control, yielding natural, temporally coherent, and varied audio renderings of environmental sounds.
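The segmentation–clustering–generation pipeline can be illustrated with a toy sketch: greedy single-link clustering stands in for the paper’s agglomerative step, and a cost-biased random walk stands in for its generation module. All names, thresholds and the 0.7 bias are invented for illustration.

```python
import random

def agglomerate(features, threshold):
    """Greedy single-link clustering of per-segment feature vectors;
    returns a class label for each segment."""
    labels = list(range(len(features)))
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    for i in range(len(features)):
        for j in range(i):
            if dist(features[i], features[j]) < threshold:
                labels[i] = labels[j]   # absorb into an earlier class
                break
    return labels

def generate(labels, costs, length, seed=0):
    """Random walk over segment classes, preferring transitions with a
    low concatenation cost; runs for any requested length."""
    rng = random.Random(seed)
    out = [rng.choice(labels)]
    while len(out) < length:
        cands = sorted(set(labels), key=lambda c: costs.get((out[-1], c), 1.0))
        out.append(cands[0] if rng.random() < 0.7 else rng.choice(cands))
    return out
```

Here `costs` would map pairs of classes to a boundary concatenation cost measured from the audio; an empty dictionary makes all transitions equally likely candidates.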

VISA3: Refining the Voice Integration/Segregation Algorithm
Dimos Makris, Ioannis Karydis & Emilios Cambouropoulos

Human music listeners are capable of identifying multiple ‘voices’ in musical content. This capability of grouping notes of polyphonic musical content into entities is of great importance for numerous processes of the Music Information Research domain, most notably for a better understanding of the underlying musical content’s score. Accordingly, we present the VISA3 algorithm, a refinement of the family of VISA algorithms for the integration/segregation of voices/streams, focusing on musical streams. VISA3 builds upon its previous editions by introducing new characteristics that adhere to previously unused general perceptual principles, addressing assignment errors that accumulate and affect precision, and tackling more generic musical content. Moreover, a new small dataset with human-expert ground-truth annotation of quantized symbolic data is utilized. Experimental results indicate the significant performance amelioration the proposed algorithm achieves in relation to its predecessors. The increase in precision is evident both for the dataset of the previous editions and for a new dataset that includes musical content with common characteristics, such as non-parallel motion, that have not yet been examined.

A Hybrid Filter-Wavetable Oscillator Technique for Formant-Wave-Function Synthesis
Michael Olsen, Julius Smith & Jonathan Abel

In this paper a hybrid filter-wavetable oscillator implementation of Formant-Wave-Function (FOF) synthesis is presented, where each FOF is generated using a second-order filter and wavetable oscillator pair. As in the original time-domain FOF implementation, this method allows separate control of the bandwidth and skirtwidth of the formant region generated in the frequency domain. Software considerations that improve the performance and flexibility of the synthesis technique are also taken into account.
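For reference, the classical time-domain FOF grain (which the paper reimplements as a filter-wavetable pair) is a decaying sinusoid at the formant centre frequency with a raised-cosine onset. A simplified sketch, with parameter scalings chosen for illustration rather than taken from the paper:

```python
import math

def fof_grain(fc, bandwidth, skirtwidth, sr=44100, dur=0.02):
    """One Formant-Wave-Function grain: a sinusoid at formant centre
    frequency fc (Hz) with an exponential decay set by the bandwidth and
    a raised-cosine attack whose length is set by the skirtwidth (Hz)."""
    n = int(sr * dur)
    alpha = math.pi * bandwidth      # decay rate controls the -3 dB bandwidth
    rise = int(sr / skirtwidth)      # shorter attack -> wider spectral skirt
    out = []
    for i in range(n):
        t = i / sr
        env = math.exp(-alpha * t)
        if i < rise:                 # raised-cosine onset
            env *= 0.5 * (1 - math.cos(math.pi * i / rise))
        out.append(env * math.sin(2 * math.pi * fc * t))
    return out
```

Summing streams of such grains, one per pitch period, yields the familiar FOF vocal formant; the paper’s contribution is realising the same envelope/oscillator split with a second-order filter and a wavetable.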

Statistical Generation of Two-Voice Florid Counterpoint
Víctor Padilla & Darrell Conklin

In this paper, we explore a method for statistical generation of music based on the style of Palestrina. First, we find patterns in one piece that are selected and organized according to a probabilistic distribution, using horizontal viewpoints to describe melodic properties of events. Once the template is chosen and covered, two-voice counterpoint in a florid style is generated using a first-order Markov model with constraints obtained from the template. For constructing the model, vertical slices of pitch and rhythm are compiled from a corpus of Palestrina masses. The template enforces different restrictions that filter the possible paths through the generation process. A double backtracking algorithm is implemented to handle cases where no solutions are found at some point within a path.
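Constraint-filtered Markov generation with backtracking can be sketched as a depth-first search. This is a single-level simplification of the paper’s double backtracking, and the toy transition table and note names are invented for illustration:

```python
def generate_with_constraints(transitions, constraints, start):
    """First-order Markov generation where the event at position i must
    satisfy constraints[i]; a dead end triggers backtracking to an
    earlier choice point."""
    def walk(seq):
        i = len(seq)
        if i == len(constraints):
            return seq
        for nxt in transitions.get(seq[-1], []):
            if constraints[i](nxt):
                done = walk(seq + [nxt])
                if done:
                    return done
        return None  # dead end: the caller tries its next candidate
    return walk([start]) if constraints[0](start) else None

# Toy model: force the 4-note line to close on 'C'.
transitions = {"C": ["D", "E"], "D": ["E"], "E": ["C"]}
line = generate_with_constraints(
    transitions, [lambda n: True] * 3 + [lambda n: n == "C"], "C")
```

In the paper the constraints come from the template extracted from a Palestrina piece, and the transition model is built from vertical pitch/rhythm slices of the corpus rather than single notes.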

FaucK!! Hybridizing the FAUST and ChucK Audio Programming Languages
Romain Michon & Ge Wang

This paper presents a hybrid audio programming environment, called FaucK, which combines the powerful, succinct Functional AUdio STream (FAUST) language with the strongly-timed ChucK audio programming language. FaucK allows programmers to on-the-fly evaluate FAUST code directly from ChucK code and control FAUST signal processors using ChucK’s sample-precise timing and concurrency mechanisms. The goal is to create an amalgam that plays to the strengths of each language, giving rise to new possibilities for rapid prototyping, interaction design and controller mapping, pedagogy, and new ways of working with both FAUST and ChucK. We present our motivations, approach, implementation, and preliminary evaluation. FaucK is open-source and freely available.

Expressive Humanoid Robot for Automatic Accompaniment
Guangyu Xia, Roger Dannenberg, Mao Kawai, Kei Matsuki & Atsuo Takanishi

We present a music-robotic system capable of performing an accompaniment for a musician and reacting to human performance with gestural and facial expression in real time. This work can be seen as a marriage between social robotics and computer accompaniment systems in order to create more musical, interactive, and engaging performances between humans and machines. We also conduct subjective evaluations with audiences to validate the joint effects of robot expression and automatic accompaniment. Our results show that robot embodiment and expression significantly improve the subjective ratings of automatic accompaniment. Counterintuitively, such improvement does not exist when the machine performs fixed media and the human musician simply follows the machine. As far as we know, this is the first interactive music performance between a human musician and a humanoid music robot with systematic subjective evaluation.

Pitch Contour Segmentation for Computer-Aided Jingju Singing Training
Rong Gong, Yile Yang & Xavier Serra

Imitation is the main approach of jingju (also known as Beijing opera) singing training throughout its nearly 200 years of inheritance. Students learn singing by receiving auditory and gestural feedback cues. The aim of computer-aided training is to visually reveal the student’s intonation problems by representing the pitch contour at the segment level. In this paper, we propose a technique for this purpose. The pitch contour of each musical note is segmented automatically by a melodic transcription algorithm incorporating a genre-specific musicological model of jingju singing: bigram note transition probabilities defining the probability of a transition from one note to another. A finer segmentation, which takes into account the high variability of steady segments in the jingju context, enables us to analyse the subtle details of the intonation by subdividing the note’s pitch contour into a chain of three basic vocal expression segments: steady, transitory and vibrato. The evaluation suggests that this technique outperforms state-of-the-art methods for jingju singing. The web prototype implementation of these techniques offers great potential for both in-class learning and self-learning.
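A bigram note-transition model of the kind described above amounts to a scoring function over candidate transcriptions. A sketch with hypothetical probabilities (not the paper’s learned values), where stepwise motion is favoured over a leap:

```python
import math

def sequence_log_prob(notes, bigram):
    """Log-probability of a MIDI-note sequence under bigram transition
    probabilities P(next | previous), with a small floor for unseen pairs."""
    lp = 0.0
    for prev, nxt in zip(notes, notes[1:]):
        lp += math.log(bigram.get((prev, nxt), 1e-6))
    return lp

# Hypothetical genre model: steps are likely, the leap 60->67 is not.
bigram = {(60, 62): 0.5, (62, 64): 0.5, (60, 67): 0.05}
better = sequence_log_prob([60, 62, 64], bigram)
worse = sequence_log_prob([60, 67, 64], bigram)
```

A transcription algorithm would use such scores to pick among competing note segmentations of the same pitch contour.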

Modulating or ‘Transferring’ Between Non-Octave Scales
Todd Harrop

The author searches for non-octave scales which approximate a 6:7:9 septimal minor triad, settling on 8, 13 and 18 equal divisions of the perfect fifth, then proposes three methods for modulating dynamically between these scales.
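The fit of such scales to the 6:7:9 triad is easy to verify numerically: for n equal divisions of the just fifth (3:2, about 702 cents), the worst-case error over the triad’s intervals (7:6, 9:7, 3:2) is:

```python
import math

def cents(r):
    return 1200 * math.log2(r)

FIFTH = cents(3 / 2)                           # ~701.96 cents
TARGETS = [cents(7 / 6), cents(9 / 7), FIFTH]  # intervals of the 6:7:9 triad

def best_error(divisions):
    """Worst-case approximation error (in cents) of the 6:7:9 triad in an
    equal division of the perfect fifth."""
    step = FIFTH / divisions
    return max(min(abs(k * step - t) for k in range(divisions + 1))
               for t in TARGETS)
```

With 13 divisions, both 7:6 and 9:7 land within about 3.1 cents of a scale step (and the fifth is exact by construction), which is consistent with the author’s choice of these divisions.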

PAPER SESSION VIII
deepGTTM-II: Automatic Generation of Metrical Structure based on Deep Learning Technique
Masatoshi Hamanaka, Keiji Hirata & Satoshi Tojo

This paper describes an analyzer that automatically generates the metrical structure of a generative theory of tonal music (GTTM). Although a fully automatic time-span tree analyzer has been developed, musicologists have to correct the errors in the metrical structure. In light of this, we use a deep learning technique for generating the metrical structure of a GTTM. Because we only have 300 pieces of music whose metrical structure has been analyzed by musicologists, directly learning the relationship between score and metrical structure is difficult due to the lack of training data. To solve this problem, we propose a multidimensional multitask learning analyzer called deepGTTM-II that can learn the relationship between score and metrical structures in the following three steps. First, we conduct unsupervised pre-training of a network using 15,000 pieces from a non-labeled dataset. After pre-training, the network undergoes supervised fine-tuning by backpropagation from output to input layers using a half-labeled dataset, which consists of 15,000 pieces labeled with an automatic analyzer that we previously constructed. Finally, the network undergoes supervised fine-tuning using a labeled dataset. The experimental results demonstrated that deepGTTM-II outperformed the previous GTTM analyzers in F-measure for generating the metrical structure.

became progressively more uncertain for decreasing consonance and when the stimuli were presented underfoot. Musicians’ labeling of the stimuli was incorrect when dissonant vibrotactile intervals were presented underfoot. Compared to existing literature on auditory, tactile and multisensory perception, our results reinforce the idea that vibrotactile musical consonance plays a perceptual role in both musicians and non-musicians. Whether this role results from a process occurring at the central and/or peripheral level, involving or not the activation of the auditory cortex, concurrent reception from selective somatosensory channels, or correlation with residual auditory information reaching the basilar membrane through bone conduction, is a question our preliminary exploration leaves open to further research.

The SelfEar project: a mobile application for low-cost pinna-related transfer function acquisition
Michele Geronazzo, Jacopo Fantin, Giacomo Sorato, Guido Baldovino & Federico Avanzini

Virtual and augmented reality are expected to become more and more influential, even in everyday life, in the near future; the role of spatial audio technologies over headphones will be pivotal for application scenarios which involve mobility. This paper addresses the issue of head-related transfer function (HRTF) acquisition with low-cost mobile devices, affordable to anybody, anywhere, and possibly faster than existing measurement methods. In particular, the proposed solution, called the SelfEar project, focuses on capturing the individual spectral features included in the pinna-related transfer function (PRTF), guiding the user in collecting non-anechoic HRTFs through a self-adjustable procedure. Acoustic data are acquired by an audio augmented reality headset which embeds a pair of microphones at the listener’s ear canals. The proposed measurement session captures PRTF spectral features of a KEMAR mannequin which are consistent with those of anechoic measurement procedures. In both cases, the results depend on microphone placement, while the subject movements that would occur with human users are minimized. Considering the quality and variability of the reported results as well as the resources needed, the SelfEar project offers an attractive solution for a low-cost HRTF personalization procedure.

SPECIAL SESSION II
Using Multidimensional Sequences For Improvisation In The OMax Paradigm
Ken Déguernel, Emmanuel Vincent & Gérard Assayag

Automatic music improvisation systems based on the OMax paradigm use training over a one-dimensional sequence to generate original improvisations. Different systems use different heuristics to guide the improvisation, but none of them benefits from training over a multidimensional sequence. We propose a system that creates improvisation closer to the way a human improviser does, where the intuition of a context is enriched with knowledge. This system combines a probabilistic model, trained on a corpus and taking into account the multidimensional aspect of music, with a factor oracle. The probabilistic model is constructed by interpolating sub-models and represents the

SPECIAL SESSION I
Virtual reconstruction of an ancient Greek pan flute
Federico Avanzini, Sergio Canazza, Giovanni De Poli, Carlo Fantozzi, Edoardo Micheloni, Niccolò Pretto, Antonio Roda’, Silvia Gasparotto & Giuseppe Salemi

This paper presents ongoing work aimed at realizing an interactive museum installation that helps museum visitors learn about a musical instrument that is part of the exhibit: an exceptionally well-preserved ancient pan flute, most probably of Greek origin.

The paper first discusses the approach to non-invasive analysis on the instrument, which was based on 3D scanning using computerized tomography (CT scan), and provided the starting point to inspect the geometry and some aspects of the construction of the instrument. A tentative reconstruction of the instrument tuning is then presented, which is based on the previous analysis and on elements of theory of ancient Greek music.

Finally, the paper presents the design approach and the first results regarding the interactive museum installation that recreates the virtual flute and allows intuitive access to several related research facets.

Interaction with a large sized augmented string instrument intended for a public setting
Jimmie Paloranta, Anders Lundström, Ludvig Elblaus, Roberto Bresin & Emma Frid

In this paper we present a study of the interaction with a large sized string instrument intended for a large installation in a museum, with focus on encouraging creativity, learning, and providing engaging user experiences. In the study, nine participants were video recorded while playing with the string on their own, followed by an interview focusing on their experiences, creativity, and the functionality of the string. In line with previous research, our results highlight the importance of designing for different levels of engagement (exploration, experimentation, challenge). However, results additionally show that these levels need to consider the users’ age and musical background as these profoundly affect the way the user plays with and experiences the string.

An exploration on whole-body and foot-based vibrotactile sensitivity to melodic consonance
Federico Fontana, Ivan Camponogara, Matteo Vallicella, Marco Ruzzenente & Paola Cesari

Consonance is a distinctive attribute of musical sounds, for which a psychophysical explanation has been found leading to the critical-band perceptual model. Recently this model has been hypothesized to play a role also during tactile perception. In this paper the sensitivity to vibrotactile consonance was subjectively tested in musicians and non-musicians. Before the test, both groups listened to twelve melodic intervals played with a bass guitar. After being acoustically isolated, participants were exposed to the same intervals in the form of either a whole-body or foot-based vibrotactile stimulus. On each trial they had to identify whether an interval was ascending, descending or unison. Musicians were additionally asked to label every interval using standard musical nomenclature. The identification of the intervals, as well as their labeling, was above chance, but

introducing weights for edges, one can define affinities and dependencies in the complex and flexible structure that is a musical composition. Ways in which the all-incidence matrix of a graph with weighted edges can evolve are discussed including the use for that purpose of elements of Information Theory. The Emerging Composition model is closer to the way composers actually write music and refine their output; it also creates the equivalent of a live organism, growing, developing, and transforming itself over time.

POSTER SESSION I
LyricListPlayer: A Consecutive-Query-by-Playback Interface for Retrieving Similar Word Sequences from Different Song Lyrics
Tomoyasu Nakano & Masataka Goto

This paper presents LyricListPlayer, a music playback interface for intersong navigation and browsing that enables a set of musical pieces to be played back by music zapping based on lyrics words. In other words, this paper proposes a novel concept we call consecutive-query-by-playback, which retrieves similar word sequences during music playback by using lyrics words as candidate queries. Lyrics can be used to retrieve musical pieces from the perspectives of the meaning and the visual scene of the song. A user of LyricListPlayer can see time-synchronized lyrics while listening, can see word sequences of other songs similar to the sequence currently being sung, and can jump to and listen to one of the similar sequences. Although there are some systems for music playback and retrieval that use lyrics text or time-synchronized lyrics, and there is an interface generating lyrics animation by using kinetic typography, LyricListPlayer provides a new style of music playback with lyrics navigation based on the local similarity of lyrics.
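Local lyric similarity of the kind such an interface needs can be sketched with the standard library’s `SequenceMatcher` over word lists (the song lines below are invented, and this is our illustration rather than the paper’s matching method):

```python
from difflib import SequenceMatcher

def similar_runs(current, other, min_words=3):
    """Find word runs in another song's lyrics that match the words being
    sung now; each run is a candidate jump target for lyrics zapping."""
    m = SequenceMatcher(a=current, b=other, autojunk=False)
    return [other[b:b + size]
            for _, b, size in m.get_matching_blocks() if size >= min_words]

now = "you are my shining star tonight".split()
other = "my shining star will never fade".split()
runs = similar_runs(now, other)
```

A real system would rank such runs across the whole collection and display them alongside the time-synchronized lyrics.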

Synchronization in chains of van der Pol oscillators
Andreas Henrici & Martin Neukom

In this paper we describe some phenomena arising in the dynamics of a chain of coupled van der Pol oscillators, mainly the synchronisation of the frequencies of these oscillators, and provide some applications of these phenomena in sound synthesis.
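A chain of coupled van der Pol oscillators can be simulated with a simple explicit Euler scheme. The nearest-neighbour spring coupling of strength k and the step size are our modelling assumptions, not the authors’ implementation:

```python
def vdp_chain_step(x, v, mu, k, dt=0.001):
    """One Euler step for a chain of van der Pol oscillators
    x'' = mu(1 - x^2)x' - x, coupled to nearest neighbours with strength k
    (free ends: boundary oscillators see themselves as the missing neighbour)."""
    n = len(x)
    a = []
    for i in range(n):
        coupling = k * ((x[i - 1] if i > 0 else x[i]) - 2 * x[i]
                        + (x[i + 1] if i < n - 1 else x[i]))
        a.append(mu * (1 - x[i] ** 2) * v[i] - x[i] + coupling)
    return ([xi + dt * vi for xi, vi in zip(x, v)],
            [vi + dt * ai for vi, ai in zip(v, a)])

# Three oscillators started asymmetrically; iterating the step lets their
# phases drift toward synchronisation while amplitudes stay on the limit cycle.
x, v = [0.1, -0.1, 0.05], [0.0, 0.0, 0.0]
for _ in range(1000):
    x, v = vdp_chain_step(x, v, mu=1.0, k=0.1)
```

For sound synthesis, one oscillator’s displacement can be read out directly as a sample stream, with mu and k shaping timbre and the degree of mutual locking.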

The state of the art on the educational software tools for electroacoustic composition
Alessandro Anatrini

In the past twenty years technological development has led to an increasing interest in the employment of information and communication technology (ICT) in music education. Research still indicates that most music teachers use technology to facilitate working in traditional composing contexts, such as score writing or MIDI keyboard sequencing, revealing a common and unhealthy conception of ICT as a mere “toolkit” with limited application. Despite this, the exploration of electroacoustic practices and their techniques, which are at the core of sound-based music, has led to valuable composition projects thanks to educational software created ad hoc.

knowledge of the system, while the factor oracle (the structure used in OMax) represents the context. The results show the potential of such a system to perform better navigation in the factor oracle, guided by knowledge on several dimensions.

Using EarSketch to Broaden Participation in Computing and Music
Jason Freeman, Brian Magerko, Doug Edwards, Morgan Miller, Roxanne Moore & Anna Xambó

EarSketch is a STEAM learning intervention that combines a programming environment and API for Python and JavaScript, a digital audio workstation, an audio loop library, and a standards-aligned curriculum to teach introductory computer science together with music technology and composition. It seeks to address the imbalance in contemporary society between participation in music-making and music-listening activities and a parallel imbalance between computer usage and computer programming. It also seeks to engage a diverse population of students in an effort to address long-standing issues with underrepresentation of women and minorities in both computing and music composition. This paper introduces the core motivations and design principles behind EarSketch, distinguishes the project from related computing and music learning interventions, describes the learning environment and its deployment contexts, and summarizes the results of a pilot study.

Exploring Moment-form in Generative Music
Arne Eigenfeldt

Generative art is art created through the use of a system. A unique and distinguishing characteristic of generative artworks is that they change with each run of the system; in the case of generative music, a musical composition re-explores itself, continually producing alternative versions. An open problem in generative music is large-scale structure: how can generative systems avoid creating music that meanders aimlessly, without forcing it into strict architectural forms? Moments is a generative installation that explores Moment-form, a term Stockhausen coined to describe (his) music that avoids directed narrative curves. Through the use of musebots – independent musical agents – that utilize a parameterBot to generate an overall template of “moments”, the agents communicate their intentions and coordinate conditions for collaborative machine composition.

Emerging Composition: Being and Becoming
Sever Tipei

Emerging Composition: Being and Becoming envisions a work in continuous transformation, never reaching an equilibrium, a complex dynamic system whose components permanently fluctuate and adjust to global changes. The process never produces a definitive version, but provides at any arbitrary point in time a plausible variant of the work – a transitory being. Directed Graphs are used to represent the structural levels of a composition (vertices) and the relationships between them (edges); parent-children and ancestor-descendant type connections describe well potential hierarchies in a piece of music. By determining adjacencies and degrees of vertices and

provide a tool for musical practice and education, areas where the old art of musical tunings and temperaments, with the notable exception of early music studies, appears to have long been neglected in favour of the practical advantages of equal temperament.

Emotion and soundscape preference rating using semantic differential pairs and the self-assessment manikin
Francis Stevens, Damian Murphy & Stephen Smith

This paper presents the findings of a soundscape preference rating study designed to assess the suitability of the self-assessment manikin (SAM) for measuring an individual’s subjective response to a soundscape. The use of semantic differential (SD) pairs for this purpose is a well-established method, but one that can be quite time-consuming and not immediately intuitive to the non-expert. The SAM is a questionnaire tool designed for measuring emotional response to a given stimulus. Whilst the SAM has seen some limited use in a soundscape context, it has yet to be explicitly compared to the established SD-pairs methodology. This study makes use of B-format soundscape recordings, made at a range of locations including rural, suburban, and urban environments, presented to test participants over a 16-speaker surround-sound listening setup. Each recording was rated using the SAM and a set of SD pairs chosen following a survey of previous studies. Results show the SAM to be a suitable method for the evaluation of soundscapes that is more intuitive and less time-consuming than SD pairs.

A Faust Based Driving Simulator Sound Synthesis Engine
Romain Michon, Chris Chafe, Nick Gang, Mishel Johns, Sile O’Modhrain, Matthew Wright, David Sirkin, Wendy Ju & Nikhil Gowda

A driver’s awareness while on the road is a critical factor in his or her ability to make decisions to avoid hazards, plan routes and maintain safe travel. Situational awareness is gleaned not only from visual observation of the environment, but also the audible cues the environment provides – police sirens, honking cars, and crosswalk beeps, for instance, alert the driver to events around them.

In our ongoing project on “investigating the influence of audible cues on driver situational awareness”, we implemented a custom audio engine that synthesizes the soundscape of our driving simulator in real time and renders it in 3D. This paper describes the implementation of this system, evaluates it and suggests future improvements. We believe that it provides a good example of the use of a technology developed by the computer music community outside of this field and that it demonstrates the potential of driving simulators as a music performance venue.

VR ‘space opera’: mimetic spectralism in an immersive starlight audification system
Benedict Carey & Burak Ulas

This paper describes a system designed as part of an interactive VR opera, which immerses a real-time composer and an audience (via a network) in the historical location of Göbekli Tepe, in southern Turkey during an imaginary scenario set in the Pre-Pottery Neolithic period (8500–5500 BCE), viewed by some to be the earliest example of a temple, or observatory. In this environment music can be generated through user

In this paper I first give a short overview of the significant premises for an effective curriculum for middle and secondary education that can authentically include electroacoustic techniques; I then summarise the state of the art in the development of the most significant educational software packages, pointing out possible future developments.

Sonification of Dark Matter: Challenges and Opportunities
Nuria Bonet, Alexis Kirke & Eduardo Miranda

A method for the sonification of dark matter simulations is presented. The usefulness of creating sonifications to accompany and complement the silent visualisations of the simulation data is discussed. Due to the size and complexity of the data used, a novel method for analyzing and sonifying the data sets is presented. A case is made for the importance of aesthetic considerations, for example the musical language used. As a result, the sonifications are also musifications; they have an artistic value beyond their information-transmitting value. The work has produced a number of interesting conclusions which are discussed in an effort to propose an improved solution to complex sonifications. It has been found that the use of primary and secondary data parameters and sound mappings is useful in the compositional process. Finally, the possibilities for public engagement in science and music through audiences’ exposure to sonification are discussed.
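The primary/secondary parameter mapping the authors describe can be sketched as a plain linear mapping of a data value into a pitch range; the ranges below are illustrative, and secondary parameters would drive further dimensions such as loudness or timbre:

```python
def map_to_pitch(value, lo, hi, midi_lo=36, midi_hi=84):
    """Parameter-mapping sonification: linearly scale a primary data value
    from [lo, hi] into a MIDI pitch range, clamping out-of-range values."""
    value = max(lo, min(hi, value))
    return midi_lo + (value - lo) / (hi - lo) * (midi_hi - midi_lo)

# Hypothetical normalized dark-matter densities mapped across four octaves:
densities = [0.0, 0.5, 1.0]
pitches = [map_to_pitch(d, 0.0, 1.0) for d in densities]
```

The aesthetic questions the paper raises start exactly here: which scale, register and mapping curve make the result a musification rather than a mere data readout.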

Precision Finger Pressing Force Sensing in the Pianist-Piano Interaction
Matthias Flückiger, Tobias Grosshauser & Gerhard Tröster

Playing style, technique and touch quality are essential for musical expression in piano playing. From a mechanical point of view, this is mainly influenced by finger pressing force, finger position and finger contact area size. To measure these quantities, we introduce and evaluate a new sensor setup suited for the in-depth investigation of the pianist-piano interaction. A strain gauge based load cell is installed inside a piano key to measure finger pressing force via deflection. Several prototypes of the finger pressing force sensor have been tested and the final sensor measures from 0 N to 40 N with a resolution smaller than 8 mN and a sample rate of 1000 Hz. Besides an overview of relevant findings from psychophysics research, two pilot experiments with a single key piano action model are presented to explore the capability of the force sensor and discuss applications.

Beatings: a web application to foster the renaissance of the art of musical temperaments
Rui Penha & Gilberto Bernardes

In this article we present beatings, a web application for the exploration of tunings and temperaments which pays particular attention to the auditory phenomena resulting from the interaction of the spectral components of a sound, in particular the pitch fusion and the amplitude modulations occurring between spectral peaks a critical bandwidth apart. By providing a simple, yet effective, visualization of the temporal evolution of these auditory phenomena, we aim to foster new research in the pursuit of perceptually grounded principles explaining Western tonal harmonic syntax, as well as
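The beating and critical-band phenomena referred to here can be sketched numerically. The following is an illustrative calculation, not part of the beatings application itself; it uses the Glasberg–Moore ERB approximation of critical bandwidth:

```python
# Sketch: beat rate between two partials and a rough critical-band test
# using the Glasberg-Moore ERB approximation (an assumed, simplified model).

def erb(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) at centre frequency f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def beat_rate(f1: float, f2: float) -> float:
    """Amplitude-modulation (beating) rate between two partials, in Hz."""
    return abs(f1 - f2)

def within_critical_band(f1: float, f2: float) -> bool:
    """True if the two partials fall inside one ERB around their mean."""
    return beat_rate(f1, f2) < erb((f1 + f2) / 2.0)

# Two partials 4 Hz apart near 440 Hz beat audibly and share a critical band.
print(beat_rate(440.0, 444.0))             # 4.0
print(within_critical_band(440.0, 444.0))  # True
```

Partials further apart than a critical band no longer produce audible beating, which is the distinction such a visualization makes perceptually relevant.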


ABSTRACTS | SESSIONS

Factorsynth: a Max tool for sound analysis and resynthesis based on matrix factorization
Juan Jose Burred

Factorsynth is a new software tool, developed in the Max environment, that implements sound processing based on matrix factorization techniques. In particular, Non-negative Matrix Factorization is applied to the input sounds, producing a set of temporal and spectral components that can then be freely manipulated and combined to produce new sounds. Based on a simple graphical interface that visualizes the factorization output, Factorsynth aims at bringing the ideas of matrix factorization to a wider audience of composers and sound designers.
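The underlying technique can be illustrated with a minimal multiplicative-update NMF (this is not Factorsynth's actual engine, only a sketch of the factorization it builds on): a magnitude spectrogram V is decomposed into spectral templates W and temporal activations H.

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF (Euclidean cost): V ~= W @ H."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, k)) + eps   # spectral templates (columns)
    H = rng.random((k, T)) + eps   # temporal activations (rows)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

# Toy "spectrogram": two spectral templates with non-overlapping activations.
V = (np.outer([1, 0, 2, 0], [1, 1, 0, 0])
     + np.outer([0, 3, 0, 1], [0, 0, 1, 1])).astype(float)
W, H = nmf(V, k=2)
err = np.linalg.norm(W @ H - V) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

Recombining subsets of the k components (i.e. summing selected outer products of W columns and H rows) is what makes such a decomposition usable as a creative resynthesis tool.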

The loop ensemble – open source instruments for teaching electronic music in the classroom
Christof Martin Schultz & Marten Seedorf

The electronic production, processing and dissemination of music is an essential part of contemporary, digitalized music culture. Digital media play an important role for children and adolescents in their everyday handling of music. These types of media call for active participation instead of mere reception and thus offer new ways of musical socialization. Despite their cultural relevance and lively discussion in German music education, these aspects are still marginalized in the educational practice of German classrooms. In the context of the interdisciplinary research project 3DMIN, we developed the loop ensemble. It consists of three virtual instruments and is designed for the practical pedagogical dissemination of electronic music and its technical basics. The ensemble is released as an Open Educational Resource. We evaluated the instruments’ usability in three ways: they were cross-checked with relevant ISO standards, three workshops were held and the participants interviewed, and, finally, an accompanying analysis using the GERD model was performed, focusing on gender and diversity aspects. The results show a distinct practical suitability of the ensemble, yet further empirical research is needed for a profound evaluation.

POSTERS SESSION II

Nuance: Adding Multi-Touch Force Detection to the iPad
Romain Michon, Julius Orion III Smith, Chris Chafe, Ge Wang & Matthew James Wright

Nuance is a new device adding multi-touch force detection to the iPad touch screen. It communicates with the iPad using the audio jack input. Force information is sent at an audio rate using analog amplitude modulation (AM). Nuance provides a high level of sensitivity and responsiveness by only using analog components. It is very cheap to make.

Nuance has been developed in the context of a larger project on augmenting mobile devices towards the creation of a form of hybrid lutherie where instruments are based on physical and virtual elements.
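The AM transmission scheme can be sketched in simulation: a slowly varying force signal modulates a carrier, and the receiver recovers it by rectification and lowpass filtering. Carrier frequency and window length below are illustrative choices, not Nuance's actual design values:

```python
import numpy as np

sr = 44100      # sample rate of the audio-jack input (assumption)
fc = 4410.0     # carrier frequency; illustrative, chosen so the averaging
                # window spans whole carrier periods (not Nuance's value)

t = np.arange(sr) / sr
force = 0.5 + 0.4 * np.sin(2 * np.pi * 2.0 * t)   # slowly varying "force" signal
am = force * np.sin(2 * np.pi * fc * t)           # amplitude modulation

# Envelope demodulation: full-wave rectify, then moving-average lowpass;
# pi/2 compensates the 2/pi mean of a rectified sine.
win = int(sr / fc) * 4                            # four carrier periods
envelope = np.convolve(np.abs(am), np.ones(win) / win, mode="same") * (np.pi / 2)

# The recovered envelope tracks the original force signal (edges excluded).
err = np.max(np.abs(envelope[win:-win] - force[win:-win]))
print(err < 0.05)  # True
```

Because the control signal varies far more slowly than the carrier, a window of a few carrier periods recovers it with small error, which is why an audio-rate channel can carry continuous force data.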

interaction, where the harmonic material is determined based on observations of light variation from pulsating stars that would theoretically have been overhead on the 1st of October 8000 BC at 23.00, and animal calls based on the reliefs in the temple. Based on theoretical observations of the stars V465 Per, HD 217860, 16 Lac, BG CVn, KIC 6382916 and KIC 6462033, frequency collections were derived and applied to the generation of musical sound and notation sequences within a custom VR environment, using a novel method incorporating spectralist techniques. Parameters controlling this ‘resynthesis’ can be manipulated by the performer using a Leap Motion controller and Oculus Rift HMD, yielding both sonic and visual results in the environment. The final opera is to be viewed via Google Cardboard and delivered over the Internet. This entire process aims to pose questions about real-time composition through time distortion and to invoke a sense of wonder and meaningfulness through a ritualistic experience.

Primary-Ambient Extraction Using Adaptive Weighting and Principal Component Analysis
Karim M. Ibrahim & Mahmoud Allam

Most audio recordings are 2-channel stereo recordings, while new playback sound systems make use of more loudspeakers, designed to give a more spatial and surrounding atmosphere that is beyond the content of the stereo recording. Hence, it is essential to extract more spatial information from the stereo recording in order to enable enhanced upmixing techniques. One way is by extracting the primary and ambient sources. Primary-ambient extraction (PAE) is a challenging problem in which we want to decompose a signal into a primary (direct) and an ambient (surrounding) source based on their spatial features. Several approaches have been used to solve the problem, based mainly on the correlation between the two channels of the stereo recording. In this paper, we propose a new approach to decompose the signal into primary and ambient sources using Principal Component Analysis (PCA) with an adaptive weighting based on the level of correlation between the two channels, to overcome the problem of low ambient energy in PCA-based approaches.
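The PCA step at the core of such approaches can be sketched as follows. This is the basic decomposition only, without the paper's adaptive weighting; the inter-channel correlation that would drive such a weight is included for illustration:

```python
import numpy as np

def pae_pca(left, right):
    """Basic PCA primary-ambient split of one stereo frame: the primary is the
    projection onto the principal direction of the 2x2 channel covariance."""
    X = np.vstack([left, right])            # 2 x N
    C = X @ X.T / X.shape[1]                # channel covariance
    w, V = np.linalg.eigh(C)
    u = V[:, np.argmax(w)]                  # principal direction
    primary = np.outer(u, u @ X)
    ambient = X - primary
    return primary, ambient

def channel_correlation(left, right):
    """Inter-channel correlation, the quantity an adaptive weight would use."""
    den = np.sqrt(np.dot(left, left) * np.dot(right, right)) + 1e-12
    return abs(np.dot(left, right)) / den

# A correlated (panned) source plus uncorrelated noise in each channel:
rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * 440 * np.arange(4096) / 44100)
L = 0.9 * s + 0.1 * rng.standard_normal(4096)
R = 0.6 * s + 0.1 * rng.standard_normal(4096)
P, A = pae_pca(L, R)
print(np.sum(P**2) > np.sum(A**2))  # True: the primary captures most energy
```

The weakness this illustrates is exactly the one the abstract targets: when the channels are highly correlated, plain PCA assigns nearly all energy to the primary, leaving an unrealistically weak ambient component.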

A Score-Informed Computational Description of Svaras Using a Statistical Model
Sertan Şentürk, Gopala Krishna Koduri & Xavier Serra

Musical notes are often modeled as a discrete sequence of points on a frequency spectrum with possibly different interval sizes, such as just intonation. Computational descriptions abstracting the pitch content in audio music recordings have used this model, with reasonable success in several information retrieval tasks. In this paper, we argue that this model restricts a deeper understanding of the pitch content. First, we discuss a statistical model of musical notes which widens the scope of the current one and opens up possibilities to create new ways to describe the pitch content. Then we present a computational approach that partially aligns the audio recording with its music score in a hierarchical manner, first at metrical-cycle level and then at note level, to describe the pitch content using this model. It is evaluated extrinsically in a classification test using a public dataset, and the result is shown to be significantly better than a state-of-the-art approach. Further, similar results obtained on a more challenging dataset which we have put together reinforce that our approach outperforms the other.



previous phrase and the position of a phrase in the music score affect expressive timing in a phrase. The expressive timing in the previous phrase takes priority over the position of the phrase, as the position only impacts the choice of expressive timing in combination with the expressive timing in the previous phrase.

This is an important message for Julie Wade: Emergent Performance Events in an Interactive Installation
Brent Lee

This is an important message for Julie Wade exists both as a multimedia performance piece and as an interactive audiovisual installation. Each version of the work was publicly presented between 2013 and 2015; a recent 2016 version seeks to incorporate elements of both of the earlier performance and installation versions. Currently, the work may be installed in a gallery space and controlled by Max/Jitter patches that randomly generate related audio and visual events; at the same time, this new installation remains open to interactive musical performance, responding to such interventions with extra layers of sound and revealing an extra set of video clips. This hybrid environment raises a number of ontological questions. In what ways are these musical interactions with the installation “performances”? Is a musician interacting with the installation a “performer” in the conventional sense, and does this interaction impose a different role on a casual gallery visitor witnessing this interaction? Can this mode of presentation of an audiovisual work transcend the limitations of conventional performance and installation? This paper explores these questions within the context of the evolution and 2016 presentation of This is an important message for Julie Wade.

Sonification as catalyst in training manual wheelchair operation for sports and everyday life
Andreas Almqvist Gref, Ludvig Elblaus & Kjetil Falkenberg Hansen

In this paper, a study on sonification of manual wheelchair movements is presented. The aim was to contribute to both rehabilitation and wheelchair sports contexts by providing meaningful auditory feedback for training of manual wheelchair operation. A mapping approach was used where key parameters of manual wheelchair maneuvering were directly mapped to different sound models. The system was evaluated with a qualitative approach in experiments. The results indicate that there is promise in utilizing sonification for training of manual wheelchair operation, but that the approach of direct sonification, as opposed to sonification of the deviation from a predefined goal, was not fully successful. Participants reported that there was a clear connection between their wheelchair operation and the auditory feedback, which indicates the possibility of using the system in some, but not all, wheelchair training contexts.

The Hyper Hurdy-Gurdy/The Hyper Zampogna
Luca Turchet

This paper describes a design for the Hyper-Zampogna, an augmentation of the traditional Italian zampogna bagpipe. The augmentation enhances the acoustic instrument with various microphones used to track the sound emission of the various pipes, different types of sensors used to track some of the player’s gestures, and novel types of real-time control of digital effects. The added technology is conveniently placed and does not hinder acoustic use of the instrument. Audio and sensor data processing is accomplished by an application coded in Max/MSP and running on an external computer. This application also allows the instrument to be used as a controller for digital audio workstations. On the one hand, the rationale behind the development of such an augmented instrument was to provide electro-acoustic zampogna performers with an interface capable of achieving novel types of musical expression without disrupting the natural interaction with the traditional instrument. On the other hand, this research aimed to provide composers with a new instrument enabling the exploration of novel pathways for musical creation.

Polytempo composer: a tool for the computation of synchronisable tempo progressions
Philippe Kocher

The accurate synchronisation of tempo progressions is a compositional challenge. This paper describes the development of a method based on Bézier curves that facilitates the construction of musical tempo polyphonies up to an arbitrary level of complexity, and its implementation in a software tool. The motivation for this work is to enable and encourage composers to create music with different, simultaneously varying tempos which would otherwise be too complex to manage.
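The role of such curves can be illustrated with a small sketch (an assumed formulation, not Kocher's implementation): synchronisation requires that simultaneous tempo progressions traverse the same number of beats in the same clock time, i.e. that the integral of tempo over time agrees.

```python
def bezier_tempo(p0, p1, p2, p3, u):
    """Cubic Bezier interpolation of tempo (BPM) at normalised time u in [0, 1]."""
    v = 1.0 - u
    return v**3 * p0 + 3 * v**2 * u * p1 + 3 * v * u**2 * p2 + u**3 * p3

def beats_elapsed(p0, p1, p2, p3, duration_s, steps=10000):
    """Integrate tempo over clock time (midpoint rule) to get beats traversed,
    the quantity simultaneous tempo progressions must agree on to synchronise."""
    dt = duration_s / steps
    return sum(bezier_tempo(p0, p1, p2, p3, (i + 0.5) / steps) / 60.0 * dt
               for i in range(steps))

# A ritardando from 120 to 60 BPM over 10 s (symmetric control points):
# the mean tempo is 90 BPM, so exactly 15 beats elapse.
print(round(beats_elapsed(120, 120, 60, 60, 10.0), 6))  # 15.0
```

Adjusting the inner control points reshapes how the ritardando unfolds while the total beat count, and hence the synchronisation point, is kept fixed.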

SoundScavenger: An Interactive Soundwalk
Naithan Bosse

SoundScavenger is an open-form, networked soundwalk composition for iOS devices. Embedded GPS sensors are used to track user positions within a series of regions. Each region is associated with a different series of soundfiles. The application supports networked interactions, allowing multiple users to explore and communicate within a semi-shared soundscape.

A model selection test on effective factors of the choice of expressive timing clusters for a phrase
Shengchen Li, Dawn Black, Mark Plumbley & Simon Dixon

Expressive timing for a phrase in performed classical music is likely to be affected by two factors: the expressive timing in the previous phrase and the position of the phrase within the piece. In this work, we present a model selection test for evaluating candidate models that assert different dependencies in deciding the Cluster of Expressive Timing (CET) for a phrase. We use cross entropy and Kullback-Leibler (KL) divergence to evaluate the resulting models: with these criteria we find that both the expressive timing in the



Wind chime sounds are used to sonify the patient’s wrist motion; a three-dimensional parameter mapping was implemented. The system was evaluated in a qualitative pilot study with one therapist and five patients. The responses to the musical auditory feedback differed from patient to patient. Musical auditory feedback in therapy can encourage patients on the one hand; on the other hand, it can also be perceived as disturbing or discouraging. From this observation we conclude that sound in electronic therapy aids in fields other than music therapy should be made optional. Multimodal electronic therapy aids, where sound can be toggled on and off, are possible applications.

Factors Influencing Vocal Pitch in Articulatory Speech Synthesis: A Study Using PRAAT
Sivaramakrishnan Meenakshisundaram, Eduardo Miranda & Irene Kaimi

An extensive study on the parameters influencing the pitch of a standard speaker in articulatory speech synthesis is presented. The speech synthesiser used is the articulatory synthesiser in PRAAT. Specifically, the repercussions of two parameters, Lungs and Cricothyroid, on the average pitch of the synthesised sounds are studied. Statistical analysis of the synthesis data reveals the extent to which each of the variables transforms the tonality of the speech signals.

Automatic musical instrument recognition in audiovisual recordings by combining image and audio classification strategies
Olga Slizovskaia, Emilia Gomez & Gloria Haro

The goal of this work is to incorporate the visual modality into a musical instrument recognition system. For that, we first evaluate state-of-the-art image recognition techniques in the context of music instrument recognition, using a database of about 20,000 images and 12 instrument classes. We then reproduce the results of state-of-the-art methods for audio-based musical instrument recognition, considering standard datasets including more than 9,000 sound excerpts and 45 instrument classes. We finally compare the accuracy and confusions in both modalities and showcase how they can be integrated for audio-visual instrument recognition in music videos. We obtain around 0.75 F1-measure for audio and 0.77 for images, with similar confusions between instruments. This study confirms that the visual (shape) and acoustic (timbre) properties of music instruments are related to each other and reveals the potential of audiovisual music description systems.

Teaching Audio Programming with the Neonlicht-Engine
Jan-Torsten Milde

In this paper we describe the ongoing development of an efficient, easily programmable, scalable synthesizer engine: Neonlicht. Neonlicht serves as the basis for teaching in the field of audio programming as part of a bachelor‘s degree program in Digital Media.

Improvisation and gesture as form determinants in works with electronics
Alyssa Aska

This paper examines several examples that use electronics as form determinants in works with some degree of structured improvisation. Three works created by the author are discussed, each of which uses gestural controller input to realize an indeterminate form in some way. The application of such principles and systems to venues such as networked performance is explored. While each of these discussed works contains an improvisatory and/or aleatoric element, much of their content is composed, which brings the role of the composer into question. The “improviser”, who in these works advances the work temporally and determines the overall form, is actually taking on the more familiar role of the conductor. Therefore, these works also bring up important conversation topics regarding performance practice in works that contain electronics and how they are realized.

Composition Identification in Ottoman-Turkish Makam Music Using Transposition-Invariant Partial Audio-Score Alignment
Sertan Şentürk & Xavier Serra

The composition information of audio recordings is highly valuable for many tasks such as music auto-tagging, music discovery and recommendation. Given a music collection, two typical scenarios are retrieving the composition(s) performed in an audio recording and retrieving the audio recording(s) in which a composition is performed. These tasks are challenging in many music traditions, where the musicians have vast freedom of interpretation. We present a composition identification methodology for such a music culture, in a music collection consisting of audio recordings and music scores. Our methodology first attempts to align the music score of a composition partially with an audio recording, using either the Hough transform or subsequence dynamic time warping (SDTW). Next, it computes a similarity from the alignment, which indicates the likelihood of the audio containing a performance of this composition. By repeating this procedure over all queries (scores or recordings, depending on the retrieval task), we obtain similarity values between the document (score or recording) and each query. Finally, the queries emitting high similarities are selected by a simple approach using logistic regression. We evaluate our methodology on a dataset of Ottoman-Turkish classical makam music. Our methodology achieves 0.95 mean average precision (MAP) for both composition retrieval and performance retrieval tasks using optimal parameters.
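Subsequence DTW, one of the two alignment methods mentioned, can be sketched on one-dimensional features. The paper aligns richer score and audio features; this toy version uses absolute differences, but shows the defining property: the query may start and end anywhere in the document.

```python
import numpy as np

def sdtw_cost(query, document):
    """Subsequence DTW: cheapest alignment of `query` against any contiguous
    region of `document` (1-D features, absolute-difference local cost)."""
    n, m = len(query), len(document)
    D = np.full((n, m), np.inf)
    D[0, :] = np.abs(query[0] - np.asarray(document, float))  # start anywhere
    for i in range(1, n):
        for j in range(m):
            local = abs(query[i] - document[j])
            best_prev = D[i - 1, j]
            if j > 0:
                best_prev = min(best_prev, D[i, j - 1], D[i - 1, j - 1])
            D[i, j] = local + best_prev
    return D[-1].min()   # end anywhere: cheapest full-query match

# The query appears as a subsequence of the longer sequence:
doc = [0, 0, 5, 7, 9, 7, 0, 0]
print(sdtw_cost([5, 7, 9], doc))   # 0.0 - exact subsequence match
print(sdtw_cost([5, 8, 9], doc))   # 1.0 - one unit of mismatch
```

A low minimum cost plays the role of the similarity value described above: it signals that the score fragment is likely performed somewhere inside the recording.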

Musical sonification in electronic therapy aids for motor-functional treatment – a smartphone approach
Benjamin Stahl & Iohannes Zmölnig

This work presents a system which uses an Android smartphone to measure the wrist motion of patients in ergotherapy and creates musical sounds from it, which can make exercises more attractive for patients. The auditory feedback is used in a bimodal context (together with visual feedback on the smartphone’s display). The underlying concept is to implement a therapy aid that transports the principles of music therapy to motor-functional therapy using a classical sonification approach, and in this way to create an electronic instrument.



Sound forest/ljudskogen: a large-scale string-based interactive musical instrument
Roberto Bresin, Ludvig Elblaus, Emma Frid, Federico Favero, Lars Annersten, David Berner & Fabio Morreale

In this paper we present a string-based, interactive, large-scale interface for musical expression that will constitute the main element of an installation for a new museum dedicated to performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room that measures 10x5 meters. A key concern is to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. An important aspect in the design of the installation is that the DMI shall be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.

A Virtual Acousmonium for Transparent Speaker Systems
Elliot Kermit-Canfield

An acousmonium, or loudspeaker orchestra, is a system of spatially-separated loudspeakers designed for diffusing electroacoustic music. The speakers in such a system are chosen based on their sonic properties and placed in space with the intention of imparting spatial and timbral effects on the music played through them. Acousmonia are in fact musical instruments that composers and sound artists use in concerts to perform otherwise static tape pieces. Unfortunately, acousmonia are large systems that are challenging to maintain, upgrade, transport, and reconfigure. Additionally, their sole task is limited to the diffusion of acousmatic music. On the other hand, most computer music centers have incorporated multichannel sound systems into their studio and concert setups. In this paper, we propose a virtual acousmonium that decouples an arbitrary arrangement of virtual, colored speakers from a transparent speaker system that the acousmonium is projected through. Using ambisonics and an appropriate decoder, we can realize the virtual acousmonium on almost any speaker system. Our software automatically generates a GUI for metering and OSC/MIDI responders for control, making the system portable, configurable, and simple to use.
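The ambisonic projection underlying such a system can be sketched at first order. This uses a basic projection decoder for a horizontal speaker ring; a real system, like the one described, would use a properly designed decoder for the target rig:

```python
import numpy as np

def encode_b_format(signal, azimuth):
    """Encode a mono signal at `azimuth` (radians) into 2-D B-format (W, X, Y)."""
    return np.array([signal / np.sqrt(2.0),
                     signal * np.cos(azimuth),
                     signal * np.sin(azimuth)])

def decode(b, speaker_azimuths):
    """Project B-format onto a ring of speakers (one gain row per speaker)."""
    rows = [[1.0 / np.sqrt(2.0), np.cos(a), np.sin(a)] for a in speaker_azimuths]
    return np.array(rows) @ b

speakers = np.deg2rad([0, 90, 180, 270])          # quad ring
b = encode_b_format(np.ones(4), np.deg2rad(90))   # virtual source at 90 degrees
feeds = decode(b, speakers)
print(int(np.argmax(feeds[:, 0])))                # 1 - the 90-degree speaker is loudest
```

Because the virtual speaker positions live entirely in the encoded scene, the same acousmonium layout can be re-decoded for whatever physical speaker ring is available, which is the portability argument the abstract makes.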

Rhythm transcription of polyphonic MIDI performances based on a merged-output HMM for multiple voices
Eita Nakamura, Kazuyoshi Yoshii & Shigeki Sagayama

This paper presents a statistical method of rhythm transcription that, given a polyphonic MIDI performance (e.g. piano) signal, simultaneously estimates the quantised durations (note values) and the voice structure of the musical notes, as in music scores. Hidden Markov models (HMMs) have been used in rhythm transcription to combine a model for music scores and a model describing the temporal fluctuations in music performances. Conventionally, for polyphonic rhythm transcription, a polyphonic score is represented as

Lazy Evaluation in Microsound Synthesis
Hiroki Nishino & Adrian Cheok

The microsound synthesis framework in the LC computer music programming language developed by Nishino integrates objects and library functions that can directly represent microsounds and related manipulations for microsound synthesis. Together with the seamless collaboration mechanism with the unit-generator-based sound synthesis framework, such abstraction can help provide a simpler and terser programming model for various microsound synthesis techniques. However, while the framework generally achieves practical real-time sound synthesis performance, it was observed that temporal suspension in sound synthesis can occur when a very large microsound object beyond the microsound time-scale is manipulated, missing the deadline for real-time sound synthesis. Such an issue may not be desirable when considering more general applications beyond microsound synthesis.

In this paper, we describe our solution to this problem. By lazily evaluating microsound objects, the computation is delayed until the samples are actually needed (e.g., for the DAC output), and only the number of samples required at that point is computed; thus, temporal suspension in real-time sound synthesis can be avoided by distributing the computation among the DSP cycles.
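The lazy-evaluation idea can be sketched outside LC (illustrative Python, not LC's actual object model): samples of a large microsound object are computed only when a DSP block requests them, so the cost is spread across cycles instead of paid up front.

```python
import math

class LazyBuffer:
    """Sketch of lazy microsound evaluation: samples are computed on demand,
    one DSP block at a time, and cached for later reuse."""

    def __init__(self, length, func):
        self.length = length
        self.func = func          # sample index -> sample value
        self.samples = []         # cache of everything computed so far
        self.computed = 0

    def read(self, n):
        """Return the next n samples, computing only what is still missing."""
        need = min(self.length, self.computed + n) - self.computed
        self.samples.extend(self.func(self.computed + i) for i in range(need))
        out = self.samples[self.computed:self.computed + need]
        self.computed += need
        return out

# A "huge" sine buffer: nothing is computed up front...
buf = LazyBuffer(10_000_000, lambda i: math.sin(2 * math.pi * 440 * i / 44100))
print(buf.computed)               # 0
# ...one DSP block of 64 samples triggers exactly 64 sample computations.
block = buf.read(64)
print(buf.computed, len(block))   # 64 64
```

Reading block by block never touches the ten million samples that have not yet been requested, which is how a deadline miss on a single DSP cycle is avoided.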

POSTERS SESSION III

Speculative Digital Sound Synthesis
Hiroki Nishino & Adrian D. Cheok

In this paper, we propose a novel technique: speculative digital sound synthesis. Our technique first optimistically assumes that there will be no change to the control parameters for sound synthesis and computes by audio vectors at the beginning of a DSP cycle; it then recomputes only the necessary amount of the output if any change is made in the same cycle after the speculation. As changes to control parameters are normally quite sporadic in most situations, the recomputation is rarely performed. Thus, the computational efficiency can be maintained mostly equivalent to computation by audio vectors without speculation when no change is made to the control parameters. Even when a change is made, the additional overhead is minimal, since the recomputation is only applied to those sound objects whose control parameters were updated.

Thus, our speculative digital sound synthesis technique can provide both the performance efficiency of audio vectors and sample-rate accurate control in sound synthesis at once; in other words, it provides a practical solution to one of the most well-known, long-standing problems in computer music software design.
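The speculative scheme can be sketched as follows (an assumed simplification, not the authors' implementation): a block is rendered with the current parameters, and only the tail after a mid-block control change is re-rendered.

```python
import math

class SpeculativeOsc:
    """Sketch of speculative block computation: render a whole audio vector
    with the current parameters, re-render only the invalidated tail if a
    control change arrives mid-block."""

    def __init__(self, freq, sr=44100, block=64):
        self.freq, self.sr, self.block = freq, sr, block
        self.phase = 0.0

    def _render(self, start_phase, freq, n):
        inc = 2 * math.pi * freq / self.sr
        samples = [math.sin(start_phase + i * inc) for i in range(n)]
        return samples, start_phase + n * inc

    def process(self, change=None):
        """change = (sample_index, new_freq) arriving mid-block, or None."""
        out, end_phase = self._render(self.phase, self.freq, self.block)  # speculate
        if change is not None:  # recompute only the tail from the change point
            k, new_freq = change
            tail_start = self.phase + k * 2 * math.pi * self.freq / self.sr
            tail, end_phase = self._render(tail_start, new_freq, self.block - k)
            out[k:] = tail
            self.freq = new_freq
        self.phase = end_phase
        return out

osc = SpeculativeOsc(440.0)
steady = osc.process()                   # no change: speculative result kept as-is
split = osc.process(change=(32, 880.0))  # frequency doubles at sample 32
print(len(steady), len(split))           # 64 64
```

When no change arrives, the cost equals a plain vectorised block; when one does, only `block - k` samples are recomputed, which is the sample-accurate control the abstract claims at near-vector efficiency.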



Towards a Virtual-Acoustic String Instrument
Sandor Mehes, Maarten van Walstijn & Paul Stapleton

In acoustic instruments, the controller and the sound producing system often are one and the same object. If virtual-acoustic instruments are to be designed to not only simulate the vibrational behaviour of a real-world counterpart but also to inherit much of its interface dynamics, it would make sense that the physical form of the controller is similar to that of the emulated instrument. The specific physical model configuration discussed here reconnects a (silent) string controller with a modal synthesis string resonator across the real and virtual domains by direct routing of excitation signals and model parameters. The excitation signals are estimated in their original force-like form via careful calibration of the sensor, making use of adaptive filtering techniques to design an appropriate inverse filter. In addition, the excitation position is estimated from sensors mounted under the legs of the bridges on either end of the prototype string controller. The proposed methodology is explained and exemplified with preliminary results obtained with a number of off-line experiments.

Melody extraction based on a source-filter model using pitch contour selection
Juan J. Bosch & Emilia Gomez

This work proposes a melody extraction method which combines a pitch salience function based on source-filter modelling with melody tracking based on pitch contour selection. We model the spectrogram of a musical audio signal as the sum of the leading voice and accompaniment. The leading voice is modelled with a Smoothed Instantaneous Mixture Model (SIMM), and the accompaniment is modelled with a Non-negative Matrix Factorization (NMF). The main benefit of this representation is that it incorporates timbre information, and that the leading voice is enhanced, even without an explicit separation from the rest of the signal. Two different salience functions based on SIMM are proposed, in order to adapt the output of such model to the pitch contour based tracking. Candidate melody pitch contours are then created by grouping pitch sequences, using auditory streaming cues. Finally, melody contours are selected using contour characteristics and smoothness constraints.

An evaluation on a large set of challenging polyphonic music material showed that the proposed salience functions help to increase the salience of melody pitches in comparison to similar methods. The complete melody extraction method is also compared against related state-of-the-art approaches, achieving a higher overall accuracy when evaluated on both vocal and instrumental music.

An Online Tempo Tracker for Automatic Accompaniment based on Audio-to-audio Alignment and Beat Tracking
Grigore Burloiu

We approach a specific scenario in real-time performance following for automatic accompaniment, where a relative tempo value is derived from the deviation between a live target performance and a stored reference, to drive the playback speed of an accompaniment track. We introduce a system which combines an online alignment process with a beat tracker. The former aligns the target performance to the reference

a linear sequence of chords and models for monophonic performances are simply extended to include chords (simultaneously sounding notes). A major problem is that this extension cannot properly describe the structure of multiple voices, which is most manifest in polyrhythmic scores, or the phenomenon of loose synchrony between voices. We therefore propose a statistical model in which each voice is described with an HMM and polyphonic performances are described as merged outputs from multiple HMMs that are loosely synchronous. We derive an efficient Viterbi algorithm that can simultaneously separate performed notes into voices and estimate their note values. We found that the proposed model outperformed previously studied HMM-based models for rhythm transcription of polyrhythmic performances.
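The Viterbi decoding such models build on can be sketched in its standard single-HMM form (the contribution described above is the merged-output, multi-voice extension, which this toy version does not implement):

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Standard Viterbi decoding of the most likely state path.
    log_emit: T x S matrix of log-likelihoods, one row per observation."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states that strongly prefer to persist; the observations flip halfway.
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
log_emit = np.log([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3)
print(viterbi(log_init, log_trans, log_emit))  # [0, 0, 0, 1, 1, 1]
```

In the merged-output setting, the state space additionally encodes which voice emitted each note, so the same dynamic program simultaneously separates voices and quantises note values.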

Visually Representing and Interpreting Multivariate Data for Audio Mixing
Josh Mycroft, Josh Reiss & Tony Stockman

The majority of Digital Audio Workstation designs represent mix data using a channel strip metaphor. While this is a familiar design based on physical mixing desk layout, it can lead to a visually complex interface incorporating a large number of User Interface objects, which can increase the need for navigation and disrupt the mixing workflow. Within other areas of data visualisation, multivariate data objects such as glyphs are used to simultaneously represent a number of parameters within one graphical object by assigning data to specific visual variables. This can reduce screen clutter, enhance visual search and support visual analysis and interpretation of data. This paper reports on two subjective evaluation studies that investigate the efficacy of different design strategies to visually encode mix information (volume, pan, reverb and delay) within a stage metaphor mixer using multivariate data objects and within a channel strip design using faders and dials. The analysis of the data suggests that, compared to channel strip designs, multivariate objects can lead to quicker visual search without any subsequent reduction in search accuracy.

Adapting a Computational Multi Agent Model for Humpback Whale Song Research for use as a Tool for Algorithmic Composition
Michael Mcloughlin, Luca Lamoni, Ellen Garland, Simon Ingram, Alexis Kirke, Michael Noad, Luke Rendell & Eduardo Miranda

Humpback whales (Megaptera novaeangliae) present one of the most complex displays of cultural transmission amongst non-humans. During the breeding season, male humpback whales create long, hierarchical songs which are shared amongst a population, and every male in the population conforms to the same song. Over the course of the breeding season these songs slowly change, and the song at the end of the season is significantly different from the song heard at its start. The song of a population can also be replaced if a new song from a different population is introduced; this is known as song revolution. Our research focuses on building computational multi-agent models which seek to recreate these phenomena observed in the wild, relying on methods inspired by computational multi-agent models for the evolution of music. This interdisciplinary approach has allowed us to adapt our model so that it may be used not only as a scientific tool but also as a creative tool for algorithmic composition. This paper discusses the model in detail and then demonstrates how it may be adapted for use as an algorithmic composition tool.


ABSTRACTS | SESSIONS

Developing a parametric spatial design framework for digital drumming
Jeremy Ham & Daniel Prohasky

This research operates at the intersection of music and spatial design within the context of improvised digital drumming. We outline a creative design research project founded on the generation of a large body of improvised drum output, with the intention of identifying a set of ‘referent’ (Pressing 1987) improvisations, patterns and phrases within the phenomenology of improvisation. We outline the development of a parametric computational framework, using software from the spatial design industry, to provide affordances (Gibson 1979) for understanding the complexities of drum improvisation.

The ‘ImprovSpace’ Grasshopper script, operating within Rhino3D, enables the 3D spatialization of digital drum-based improvisations, wherein the parameters of drum notes, duration and velocity can all be flexibly manipulated. Drum phrases and patterns can be compared individually, and clusters of repeated elements can be found within a larger corpus of improvisations. The framework enables insights into the specific attributes that constitute individual style, including playing behind and ahead of the beat, microtiming, rubato and other elements. It is proposed that, by bringing these improvisations into the visual and spatial domains in plan, elevation and isometric projections, a theoretic musico-perspectival hinge may be deconstructed. This may provide insights for non-reading, visually and spatially dominant musicians within reflective, educational and other contexts.

without resorting to any symbolic information. The latter utilises the beat positions detected in the accompaniment, reference and target tracks to (1) improve the robustness of the alignment-based tempo model and (2) take over the tempo computation in segments when the alignment error is likely high. While other systems exist that handle structural deviations and mistakes in a performance, the portions of time where the aligner is attempting to find the correct hypothesis can produce erratic tempo values. Our proposed system, publicly available as a Max/MSP external object, addresses this problem.

Tsam: a tool for analyzing, modeling, and mapping the timbre of sound synthesizers
Stefano Fasciani

Synthesis algorithms often have a large number of adjustable parameters that determine the generated sound and its psychoacoustic features. The relationship between parameters and timbre is valuable for end users, but it is generally unknown, complex, and difficult to analytically derive. In this paper we introduce a strategy for the analysis of the sonic response of synthesizers subject to the variation of an arbitrary set of parameters. We use an extensive set of sound descriptors which are ranked using a novel metric based on statistical analysis. This enables study of how changes to a synthesis parameter affect timbre descriptors, and provides a multidimensional model for the mapping of the synthesis control through specific timbre spaces. The analysis, modeling and mapping are integrated in the Timbre Space Analyzer & Mapper (TSAM) tool, which enables further investigation on synthesis sonic response and on perceptually related sonic interactions.
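One way to rank descriptors by how strongly they respond to a swept synthesis parameter is a simple variance-ratio score. This is a hedged stand-in for the paper's (unspecified here) statistical metric, not the actual TSAM ranking:

```python
import numpy as np

def rank_descriptors(responses):
    """responses: {descriptor_name: (n_settings, n_repeats) array} of
    descriptor values measured while sweeping one synthesis parameter.
    Score each descriptor by between-setting variance over within-setting
    variance, so descriptors that track the parameter rank first.
    (Illustrative metric only; TSAM defines its own ranking.)"""
    scores = {}
    for name, r in responses.items():
        r = np.asarray(r, dtype=float)
        between = np.var(r.mean(axis=1))          # spread across settings
        within = np.mean(np.var(r, axis=1)) + 1e-12  # measurement noise
        scores[name] = between / within
    return sorted(scores, key=scores.get, reverse=True)
```

Descriptors that rank highly under such a score are the ones worth keeping as axes of a timbre space for mapping synthesis control.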

CAMeL: Carnatic Percussion Music Generation Using N-Gram Models
Konstantinos Trochidis, Carlos Guedes, Akshay Anantapadmanabhan & Andrija Klaric

In this paper we explore a method for automatically generating Carnatic-style rhythmic patterns. The method uses a set of annotated Carnatic percussion performances to generate new rhythmic patterns. The excerpts are short percussion solo performances in ādi tāla (8-beat cycle), performed in three different tempi (slow/moderate/fast). All excerpts were manually annotated with beats, downbeats and strokes in three different registers (Lo/Mid/Hi). N-gram analysis and Markov chains are used to model the rhythmic structure of the music and determine the progression of the generated rhythmic patterns. The generated compositions were evaluated by a Carnatic music percussionist through a questionnaire, and the overall evaluation process is discussed. Results show that the system can successfully compose Carnatic-style rhythmic performances and generate new patterns based on the original compositions.
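An n-gram Markov chain over annotated stroke labels can be sketched in a few lines. This is a minimal generic sketch, not the CAMeL implementation; the stroke alphabet (the Lo/Mid/Hi registers) and the order n=2 are assumptions for illustration:

```python
import random
from collections import Counter, defaultdict

def train_ngram(sequences, n=2):
    """Count stroke continuations after every length-n context."""
    model = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - n):
            model[tuple(seq[i:i + n])][seq[i + n]] += 1
    return model

def generate(model, seed, length, rng=None):
    """Walk the Markov chain: repeatedly sample the next stroke given
    the last n strokes, weighted by the training counts."""
    rng = rng or random.Random(0)
    out, n = list(seed), len(seed)
    for _ in range(length):
        counts = model.get(tuple(out[-n:]))
        if not counts:            # unseen context: end the phrase
            break
        strokes, weights = zip(*counts.items())
        out.append(rng.choices(strokes, weights=weights)[0])
    return out
```

Trained on the annotated ādi tāla excerpts, such a chain reproduces locally plausible stroke progressions while still producing new global patterns.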


OVERVIEW | CONCERTS

WEDNESDAY, AUGUST 31

ALEXANDER SCHUBERT – PORTRAIT CONCERT
KAMPNAGEL, HALL K2 | 20.00–21.30

Hello (2014) for clarinet, piano, percussion, electric zither, cello and electronics

Your Fox’s A Dirty Gold (2011) for voice solo and electronics

F1 (2016) for clarinet, piano, percussion, electric zither, violin, electric guitar, cello and electronics

Point Ones (2012) for clarinet, piano, percussion, violin, electric guitar, cello, electronics and conductor

Sensate Focus (2014) for clarinet, percussion, electric zither, cello and electronics

Performed by: DECODER Ensemble

SHOWCASE HAMBURG CURATED BY VAMH
KAMPNAGEL, HALL KMH | 21.30–23.45

Asmus Tietchens: <Solo ohne Titel>

Birgit Ulher & Gregory Büttner: Araripepipra

Krachkistenorchester: Krachkisten

Intermission

Martin von Frantzius: Violin Sidetracks

Duo C: Music for Hippos

John Eckardt & Katrin Bethge: visual bassic

REAL-TIME COMPOSITION, IMPROVISATION & ANIMATED SCORE I
KAMPNAGEL, HALL K2 | 20.00–21.30

Gero Koenig: Chordeograph. ResonanceGrid I (2016). Chordeograph solo

Arne Eigenfeldt: Unnatural Selection. Mvt 2: Too Much Beauty is Before You (2014)

Ryan Ross Smith: Study No. 55 (2016)

David Kim-Boyle: Point Studies No. 6 (2016)

Nicolas Collins: Roomtone Variations (2013–14)

Gero Koenig: Chordeograph. ResonanceGrid II (2016). Quintet

Performers: RADAR-Ensemble, TonArt-Ensemble and others

REAL-TIME COMPOSITION, IMPROVISATION & ANIMATED SCORE II
KAMPNAGEL, HALL KMH | 21.30–23.00

Sascha Lino Lemke: aKKORDeONoff

Cat Hope: Pure (2014)

Gil Dori: Desert Stroll (2013)

Alexander Sigman: fcremapno (2014)

Pedro Louzeiro: Comprovisação no. 5 (2016)

Jason Freeman: Shadows (2015)

Se-Lien Chuang & Andreas Weiler: Momentum TonArt_SMC (2016)

Performers: RADAR-Ensemble, TonArt-Ensemble, Bernhard Fograscher and others

LUNCH CONCERT I
HAW, PRODUKTIONSLABOR | 12.00–13.00

Francesco Maria Paradiso: Profili d’onda for prepared Vibraphone and Tape
Vibraphone: Lin Chen

Leah Reid: Ring, Resonate, Resound for Tape

Shelly Knotts, Holger Ballweg, Jonas Hummel: Flock for 3 Live Coding Laptopists

Laptops: Shelly Knotts, Holger Ballweg, Jonas Hummel

Seth Shafer: Hookean Elastics for Tape

Chikashi Miyama: Modulations for Self-built Instrument with Interactive video

LUNCH CONCERT II
HAW, FORUM | 13.30–14.30

Gerriet K. Sharma: XXXXXXX for tape

Huw McGregor: Metronic for tape

Francesco Galante: Itineraires (pour edgar Varèse) for tape

Leonardo “Leo” Cicala: Khoisan for tape

CONCERT/INSTALLATION: MAELSTROM
HAW, GARDEN | 17.30

THURSDAY, SEPTEMBER 1

S.T.R.E.A.M. FESTIVALFeatured composers: John Chowning, Nicolas Collins, Jason Freeman, Cat Hope, Ryan Ross Smith, Alexander Schubert and many more

Concerts from electronic and improvised music to real-time composition and animated scores, curated by DEGEM, VAMH and an international expert jury. Copeco final concert.

Ensembles: Decoder, RADAR, TonArt, Lin Chen, Carola Schaal and others

CONCERT VENUES

Kampnagel
Jarrestraße 20, 22303 Hamburg
Telephone: +49 / 40 / 27 09 49 49
www.kampnagel.de

HAW
University of Applied Sciences
Finkenau 35, 22081 Hamburg
Telephone: +49 / 40 / 428 75-0
www.haw-hamburg.de

Further Information on page XX



FRIDAY, SEPTEMBER 2

JOHN CHOWNING & FRIENDS
KAMPNAGEL, K2 | 20.00–21.30

John Chowning: Turenas (1972) for tape

Manfred Stahnke: Partch Harp (1987/89) for microtonal harp and DX 7 II

Jean-Claude Risset: Nature Contre Nature (1996) for percussion and tape

Constantin Basica: Tell Me What to Do (2013) for clarinetist, conductor, Kinect, live electronics, and live video

Georg Hajdu: Beyond the Horizon (2007) for 2 BP clarinets and synthesizer

John Chowning: Voices (2011) for voice and electronics

Performers: Maureen Chowning, Carola Schaal, Nora-Louise Müller, Lin Chen, Marcia Lemke-Kern, Andrei Koroliov, Gesine Dreyer, Manfred Stahnke and others

CONCERT-PARTY BY JACOB SELLO
KAMPNAGEL, K4 | 22.00–00.00

SATURDAY, SEPTEMBER 3

LUNCH CONCERT V
HAW, PRODUKTIONSLABOR | 13.15–14.15

Artemi-Maria Gioti: Magnetic fields (2016) for electromagnetically augmented piano

Gerriet K. Sharma: grrawe (2013) for tape

Richard Hoadley: Calder’s flute for flute and computer

Constantin Basica: Transhumant for fixed media

Intermission

Jones Margarucci: Inhabited Places_part II (Behind Two Holes) (2015) for tape

Leng Censong: Jiang Xue (2014) for Yang Qin, 8 channels audio and Video

Ayako Sato: August, blue colored green (2015) for tape

Takuto Fukuda: Beyond the eternal chaos (2014) for flute and electronics

Wilfried Jentzsch: Particle World for fixed media

COPECO CONCERT I
KAMPNAGEL, HALL K2 | 18.00–20.00

COPECO CONCERT II
KAMPNAGEL, HALL KMH | 20.30–22.30

LUNCH CONCERT III
HAW, PRODUKTIONSLABOR | 12.15–13.00

Johannes Kreidler: Two pieces for clarinet and video

Andrej Koroliov: irritate me (Herbst) for clarinet, video and distortion pedal

Dong Zhou: Schizophonic (2016) for clarinet, electronics, live video

Yu-Chung Tseng: Rhapsody for clarinet and tape

Samuel Penderbayne: MY NEW CROSS OVER-CLASSICAL EP!!! (2016) for clarinet & Max4Live

Clarinet: Carola Schaal

LUNCH CONCERT IV
HAW, FORUM | 13.30–14.30

Hiromi Ishii: Ryojin-fu (2013) for tape

Clemens von Reusner: Definierte Lastbedingung (2016) for tape

Takuto Fukuda: Assimilation (2016) for tape, double bass and live electronics

Savannah Agger: Undercurrents (2016) for tape

Paul Hauptmeier: sines, crickets and a few (2016) for tape

Julia Mihàly: XXXXXXX (2016)


PROGRAM NOTES | CONCERTS

Beyond these technical considerations, the piece plays with the vocabulary of the conductor and the expectations and traditions connected to those gestures. It is not always predictable what the result of the conductor's movement will be.

The piece is part of a sequence of sensor-based works such as “Your Fox’s A Dirty Gold” and “Laplace Tiger”, which explore different interaction concepts with augmented instruments and performance codes.

Sensate Focus (2014) for clarinet, percussion, electric zither, cello and electronics

Cats were reared in a light-tight box in which the only source of illumination was a 9-psec strobe flash every 2 sec. This allowed them to experience visual form but they did not experience visual movement. Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction is a customary measure of the effectiveness of sampling. Sensate focusing is a term usually associated with a set of specific sexual exercises for couples or for individuals. Each participant is encouraged to focus on their own varied sense experience, rather than to see orgasm as the sole goal of sex.

Performed by: DECODER Ensemble

WEDNESDAY, AUGUST 31
SHOWCASE HAMBURG CURATED BY VAMH
KAMPNAGEL, HALL KMH | 21.30–23.45

Asmus Tietchens
<Solo ohne Titel>

Birgit Ulher & Gregory Büttner
Araripepipra

Gregory Büttner plays sounds from a computer through a small loudspeaker, which Birgit Ulher uses as a mute for her trumpet. In this way the trumpet sounds and the electroacoustic sounds are modulated by the acoustic resonance chamber of the trumpet. The trumpet functions simultaneously as transmitter and receiver.

Gregory Büttner plays his electroacoustic music through various external loudspeakers. At the same time these sounds are modulated by resonance chambers (everyday objects such as plastic cups) and vibrating objects placed on the speakers. Birgit Ulher uses extended loudspeakers, fed with radio noise, in her trumpet mutes. The trumpet functions as an acoustic chamber and modulates the radio noise; the trumpet is thus transmitter and receiver at the same time. She also uses metal sheets as vibrating objects: by varying the pressure on the metal sheets held against the trumpet bell, she creates multiphonics and splitting sounds.

Birgit Ulher: trumpet, loudspeaker, mutes
Gregory Büttner: computer

CONCERTS
WEDNESDAY, AUGUST 31
ALEXANDER SCHUBERT – PORTRAIT CONCERT
KAMPNAGEL, HALL K2 | 20.00–21.30

Hello (2014) for clarinet, piano, percussion, electric zither, cello and electronics

Hello is an audio-visual piece in which the projection serves as a score to be interpreted by the ensemble. The video consists of gestures performed by the composer in his living room. The piece comes in eight movements and is an invitation into the personal world of Alexander Schubert. Please enjoy.

Your Fox’s A Dirty Gold (2011) for voice solo and electronics

“Fox” is what you could describe as a modern pop song (a love song, actually). It incorporates elements of contemporary and experimental electronic music in the domain of pop music. The concept is to link all involved elements to the movement and gestures of the performer, which allows the singer to control, trigger and shape all technical and musical parts of the composition in real time. The software Max/MSP is used to control the live electronics and the DMX lights, driven by the upper-body movement of the singer and the electric guitar interface.

The aim of this technical concept is to establish an embodiment of the electronic elements of the piece, in order to make them perceivable and controllable like a regular acoustic instrument.

F1 (2016) for clarinet, piano, percussion, electric zither, violin, electric guitar, cello and electronics

Point Ones (2012) for clarinet, piano, percussion, violin, electric guitar, cello, electronics and augmented conductor

In “Point Ones” the conductor is equipped with motion sensors and is thereby able to conduct both the ensemble and the live-electronics. Most of the piece is realized not with traditional conducting but with cue gestures that mark the beginning of new passages – hence the title “Point Ones”. The aim is to experience the live-electronics in an embodied way and to create a fully controllable instrument for the conductor. For this reason the piece does not use a click track or other timeline-based fixed approaches.



THURSDAY, SEPTEMBER 1
LUNCH CONCERT I
HAW, PRODUKTIONSLABOR | 12.00–13.00

Francesco Maria Paradiso
Profili d’onda for prepared Vibraphone and Tape

Profili d’onda. The title of the composition refers to the relation the work constructs with the definite, symmetric motion of the most common waveforms: the sine, square, triangle and saw-tooth wave. The graphic representation of the four waveforms is used to build the profile of dynamics across three main parts, the graphic panels of the composition. Its main purposes have been: new sound-colours on the vibraphone, synthesis of more complex timbres through the addition of instruments of definite and indefinite pitch (crotales and cowbells), new effects connected with unusual or new performance techniques, and the form and notation of the composition. The compositional procedure mainly concentrates on defining the instructions for the performance; the interplay between the choices of the composer and the activities of the performer gives the composition its form. The notation of the three main parts, like the musical thought itself, is "writing action": the gestural, graphic-spatial notation is a conscious choice.

The composition consists of three main blocks: three panels with text and graphic-spatial notation. One further part, in the middle of the composition, contains three episodes in closed form and ordinary notation. Each panel presents one diagram divided into sections for the dynamic scale and temporal proportion, a series of symbols and icons above the diagram, and texts related to the performer's activity. The first panel, '(A) Sine waves', is divided into two diagrams or main sections. In the first, divided into three parts, the performer plays behind or in front of the vibraphone (not the normal position) and acts on nylon strings above the metal bars (just the "black" keys); in the second, divided into four parts, the performer plays behind the vibraphone (ordinary position) and acts only on the metal bars. The second panel, '(B) Square waves', consists of one diagram divided into nine sections. After panel (B), three episodes are proposed; only one episode is performed before the third block. The third panel, '(C) Triangle waves', is divided into three sections: the first, '(C) Triangle waves', consists of two parts; the second, '(A) Sine waves', of one part (the performer plays in front of the vibraphone and on the strings above the metal bars); the third, '(D) Saw-tooth waves', is divided into two parts.

In each panel of the composition the dynamic line is modulated by the graphic representation of the characteristic motion of each waveform, and the time line (the duration of each section) is identified, in proportion, through the graphically expressed space. The composer has provided a series of tables of musical materials (scales, arpeggios, chords, etc.) to be used for the performance activity of each panel. The musical material of the tables is derived from the first episode of the middle part of the piece and is given in aleatory notation; the order and direction of reading (original, retrograde, inverse, etc.) and the ways of combining different musical figures are left to the performer. The electronics were composed by developing the characteristics of each panel: for panel A (Sine waves) they were realized using sine waves in additive synthesis, with filters for the development of more complex timbres, and the electronic parts for the other panels were realized in the same way.

Lin Chen: vibraphone

KrachkistenorchesterKrachkisten

Martin von FrantziusViolin Sidetracks

Violin Sidetracks is part of my “sidetracks” series, started in 2013 with marimba sidetracks. All “sidetracks” pieces are composed for a solo instrument and at least one interactive counterpart. In violin sidetracks the solo violin even has two counterparts: the live-electronics and five DMX-controlled floodlights.

The more real-time interaction capabilities a piece of software offers, the more complex and error-prone it becomes. In violin sidetracks many such processes come together: among other live-electronic effects, a live-sampling system captures single violin sounds for use with a rhythm machine. Driven by the performer's gestures or by sound analysis of gain and pitch, the DMX light system also reacts in real time.

This complexity and its recurrent problems led to the idea of bringing a kind of “struggle with technology” on stage, where the live-electronics and DMX lights play the roles of two tough-minded characters who do not always do what the performer wants. As in classical chamber music, all characters (live performer, live-electronics and lights) compete with each other.

The piece is solely controlled with a Microsoft Kinect for Xbox 360 sensor. All electronic sounds are live-electronically treated violin sounds or live-recorded sounds “rearranged” in real time – no prerecordings are used.

Duo C
Music for Hippos

C was formed by F#X/E.K.G. and Nika Son in late 2014, when they were given the chance to create an electroacoustic space in a gallery room in Hamburg. Working together continuously since, they compose musical sculptures and collages that are not easily pigeonholed. They use a wide variety of analogue sound generators, from tape machines, filters, self-made physical resonators, synthesizers and drum machines to the human voice. During the process of development almost every element undergoes several stages of deformation, inventing denatured structures and spaces that can soothe and disturb at the same time. Spatial textures, granular synthesis, abstract percussion, frozen sound, ripped voices. Their first EP will be released this autumn on the Hamburg-based label VIS.

www.cwelle.org

John Eckardt & Katrin Bethge
visual bassic



Humans and agents alike become ensnared in a chaotic game of cat and mouse: it becomes blurred whether the human input to the system is convincing the AI society to flock to the humans' musical proposal, or whether the humans are chasing the agents' various preferences in order to win votes. The humans cannot predict exactly how agents will react or move within the network. To win votes, the humans can aim for mass appeal with relatively neutral proposals, or try to find a radical niche which strongly differentiates them from the other performers.

Although it could be argued that the audio input into the voting mechanism could be any audio, we propose that the levels of abstraction and interpretation present in live coding invite analogies with governmental policy writing. The code/policy written by the performer/bureaucrat is interpreted by the computer/government, and its effect is experienced by the artificial population, who form opinions which add up to affect their voting choices when it comes to elections.

Constantly updating a running process to get closer to AI preferences could be compared to the way that politicians respond to opinion polls and pressure groups.

Chikashi Miyama
Modulations for Self-built Instrument with Interactive video

The goal of this composition is to explore the interactive relationship among the human body, electronic sound and live video. The performer controls numerous audio and visual parameters in real time, employing a pair of self-built sensor gloves named Qgo. The gloves detect the distance between the two hands and the tilt of each hand, and send these data wirelessly to a host computer using XBee RF modules. The host computer maps the received data onto the parameters of the software synthesizer and the video-generating software running on it. The mappings between the received data and the audiovisual parameters are not fixed; they vary gradually as the piece unfolds. The hardware of Qgo was designed and built by the composer during an artist residency at ZKM, Karlsruhe, in 2011. The project was supported by a DAAD research grant for doctoral candidates.
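A gradually varying mapping like the one described can be sketched as a crossfade between two sensor-to-parameter weight matrices. The matrix shapes and the linear blend are illustrative assumptions; the piece's actual mapping is not published here:

```python
def blend_mapping(sensors, map_start, map_end, progress):
    """Linear sensor-to-parameter mapping that crossfades from map_start
    to map_end as progress goes 0 -> 1 (e.g. elapsed time in the piece).
    sensors: list of sensor readings (distance, tilts, ...).
    map_start, map_end: rows of per-parameter weights, one row per
    synthesis/video parameter."""
    w = min(max(progress, 0.0), 1.0)   # clamp progress to [0, 1]
    return [sum(((1 - w) * a + w * b) * s
                for a, b, s in zip(ra, rb, sensors))
            for ra, rb in zip(map_start, map_end)]
```

Early in the piece a sensor might drive one parameter; by the end the same gesture drives another, without any discontinuity in between.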

THURSDAY, SEPTEMBER 1
LUNCH CONCERT II
HAW, FORUM | 13.30–14.30

Gerriet K. Sharma
XXXXXXXXXXXX for tape

No visual means can fully show or express what we experience while listening to spatial sound compositions, as the spatial is a parameter that adds a plastic component to the auditory experience, one mostly different from the text or picture world. Although there have been many attempts to visualize spatial compositions, these attempts can only assist documentation or the understanding of structural aspects; they cannot “picture” the emergence of the 3D auditory object. A triangular movement of a virtual source on a screen might resemble a triangular shape in the studio space, as long as one

Leah Reid
Ring, Resonate, Resound for Tape

Ring, Resonate, Resound is an acousmatic composition written in homage to John Chowning. The piece tips its hat to Chowning’s Stria, Turenas, and the beautiful sonic landscape Chowning explored through his research and discovery of FM synthesis. Ring, Resonate, Resound is dedicated to him. The composition explores timbre through dozens of bell sounds, which provide the harmonic and timbral material, structure, foreground, and background for the piece. The composition is comprised of five sections, each examining a different set of bells and materials that interact with them. The piece begins thin and bright, then gradually increases in spectral and textural density until the listener is enveloped by a thick sound mass of ringing bells. The bells gently fade into waves of rich harmonic resonances. The piece was composed using a multidimensional timbre model Reid developed while at Stanford University. The model is based on perceptual timbre studies and has been used by the composer to explore the compositional applications of “timbre spaces” and the relationship between reverberant space and timbre, or rather the concept of “timbre in space.”

Shelly Knotts, Holger Ballweg & Jonas Hummel
Flock for 3 Live Coding Laptopists
Laptops

Flock (2015) for Live Coders explores flocking mechanisms in network structures as a means of managing collaboration in a live coding performance. Loosely modelling the behaviour of bureaucrats in their interactions with self-regulating political systems, the three performers engage in a live coding election battleground, hoping to win votes from an artificial population. The more votes a performer wins, the more prominent in the final mix that performer’s audio will be.

SuperCollider’s JITlib was chosen as the language of algorithmic politics as SuperCollider is a reasonably neutral language, which places relatively few stylistic limits on performers, allowing them to form their own musical manifestos. Performers will develop their framework code in rehearsals beforehand, allowing them to form individual musical election strategies, before making their policy proposals (in musical form) to the artificial population in performance.

The voting mechanism itself is based on Rosen’s work on flocking in bi-partite decentralized networks (Rosen 2010). Rosen proposes that the most efficient and successful means of managing large group collaborations is through decentralised self-regulating networks. He suggests that these networks can maintain their decentralised non-hierarchical organisations through flocking mechanisms.

In Flock the network is made up of two types of nodes: feature trackers (using the SCMIR library in SuperCollider) and AI agents (which have preferences and voting rights). Agents modify their personal preferences throughout the performance depending on the preferences of their network neighbours and the feature states of the incoming audio. How much each AI agent modifies its preferences depends on its profile value for autonomy. Agents that vote for the current winning party slightly increase their influence over other agents.
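One flocking step for an AI voter might look like the sketch below. The piece itself runs in SuperCollider, and the exact update rule is not given in the note, so the equal weighting of neighbour influence and audio features, and the scalar preference, are assumptions:

```python
def update_preference(pref, neighbour_prefs, audio_feature, autonomy):
    """One flocking step for an AI voter (illustrative sketch).
    The agent's scalar preference is pulled toward the mean of its
    neighbours' preferences and toward the current audio feature,
    attenuated by its autonomy in [0, 1] (autonomy 1 = never moves)."""
    social = sum(neighbour_prefs) / len(neighbour_prefs)
    pull = 0.5 * (social - pref) + 0.5 * (audio_feature - pref)
    return pref + (1.0 - autonomy) * pull
```

Iterating this rule across the agent network each analysis frame yields the drifting, partly herd-like voting behaviour the performers must chase.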



sound events is set in motion in the wake of a gesture that represents the primal need to respond to another. In the remaining three parts a construction of micro-events moves in the opposite direction to the first part, evoking on the one hand the charm of a better world and on the other the fear of the unknown.

THURSDAY, SEPTEMBER 1
CONCERT/INSTALLATION: MAELSTROM
HAW, GARDEN | 17.30

A spatial installation in the inner courtyard of the Finkenau campus by students of the architecture class of Prof. Lothar Eckhardt, HafenCity University Hamburg (HCU): Marcelo Acevedo Pardo, Victoria Dall, Arne Drewes, Tim Garbers, Matthis Gericke, Justus Griesenberg, Steffen Helm, Peer Klausberger, Leonie Kümpers

Sponsors: GIG City Nord GmbH and Die Wäscherei/City Nord

Performance: dancers of the Erika Klütz School for Theatre Dance and Dance Pedagogy; choreography: Jeanette Weck

Music: ....

MAELSTROM is a reinterpretation of the spatial installation BLOSSOM, realized in early July of this year in the park of City Nord as part of the performance project EDEN.EDEN_Garten der Lüste: a spiral-geometric "paradisiacal blossom" made from a multitude of coloured rods. Whether on the small scale of an opening blossom or in the mighty collision of storms, nature often shows itself in complex forms. It seems to be the way of things to wind around one another and stir one another up, finally colliding and culminating in a common point. In MAELSTROM, as it were, the currents of all directions meet, find their climax in the epicentre of the vortex and accelerate unstoppably into the vertical. In the eye of the storm a view opens up and tells us of the infinity of being.

uses simple click, burst or white-noise sounds. But anything sounding beyond these basic signals has almost nothing to do with the visual idea, shape or architectural belief. The plastic sound object exists only in the ear; it is by all means an alternative "world proposal". To underline this well-known but often "overlooked" situation, and to better understand the differences, the piece asks whether we can rewire this thought and compose spatial music that guides the ear in space (and time) as if we were "looking". So what might "seeing with the ears" be? What, for instance, is an auditory gaze, glance, or stare? And what about the Ohrenblick?

Huw McGregor
Metronic (2015) for tape

‘Metronic’ is built from the soundscape of the Athens Metro, recorded in 2014. It is an unusual theatrophonic soundscape which requires no treatment to express its sonic diversity. How we move from one conceptual space in sound to another can be an obstacle in composition, and with soundscape field recordings the issue becomes highly complex. The work demonstrates some of these issues, and hopefully some methods for traversing them. To explore this as a study, I selected two sine waves, for the left and right speakers, to express their gestural signatures within the performance space. The sine waves give no more impression than exactly what they portray: pulsating sonic entities that explore a given space, or a synthesized voice performed in the abstract and crystallized within the sound field of the performance space. The soundscape of the metro sits within distal space. With these parameters we have the opportunity to showcase depth, from the sound that resonates within the mind to the barely audible at its furthest distance.

Francesco Galante
Itineraires (pour Edgar Varèse) (2012–13) for tape

“Itineraires (pour Edgar Varèse)” is an acousmatic FM music piece, realized for the 130th anniversary of Edgar Varèse’s birth. The form of the piece is segmented into six movements. All sound morphologies were generated by a special group of eight simple FM generators. I have used FM synthesis in several of my pieces over the last 25 years, and I have always been surprised by its sonic possibilities, its capacity to generate highly ambiguous phenomena (between real and abstract semantics). I composed this music as a homage to the great French-American composer. Recently I reworked the stereo spatial image of the piece.

Leonardo “Leo” Cicala
Khoisan (2015) for tape

Khoisan is a symbolic piece playing on the peculiar morphological elements of this primal language, full of hard consonants and popping sounds, explored from the perspective of an inner psychological sense of necessity: the migration that, since the dawn of our species, still repeats itself between Africa and Europe. The shape of the track is arranged metaphorically in events which take place as a series of stages, steps; in the first part, the evolution of


PROGRAM NOTES | CONCERTS

However, generative music is different from improvisation, in that it produces multiple parts under the control of a single creative mind: a composition, rather than one line within a performance. As such, aspects of artificial intelligence are necessary to control the relationships between the many musical elements within a work such as An Unnatural Selection.

The titles of the individual movements are derived by applying the same method of musical construction to Jane Austen’s Pride and Prejudice.

Ryan Ross Smith
Study No. 55 (2016)

David Kim-Boyle
Point Studies No. 6 (2016)

point studies no. 6 (2016) for string quartet continues an exploration of the musical possibilities of graphic, open-form scores generated in real time. In the work, performers interpret a set of twenty-four concentric rings, each of which contains a rotating radial line terminated at each end by nodes of various colors. As the score develops, a series of other colored nodes appears on the concentric rings, their colors denoting various natural harmonics. Various aleatoric processes are used in the generation of the score’s primitive materials, which creates low-level variations in each realization of the score.

Nicolas Collins
Roomtone Variations (2013–14)

In Roomtone Variations the resonant frequencies of a concert hall are mapped, in real time, through controlled acoustic feedback, and projected as staff notation. The strongest, most resonant pitches appear first, at the left, the weakest at the far right. Once the staves are filled the musicians improvise variations on the notes as they are highlighted, gradually stepping through this site-specific “architectural tone row”. To my ear a good performance hints at Satie’s “furniture music” or (if I’m really lucky) Morton Feldman. My computer program incorporates MaxScore notation software by Nick Didkovsky and Georg Hajdu, whose generous technical assistance is gratefully acknowledged.

Gero Koenig
Chordeograph. ResonanceGrid II (2016)
Quintet

For program notes, see Chordeograph. ResonanceGrid I.

THURSDAY, SEPTEMBER 1
REAL-TIME COMPOSITION, IMPROVISATION & ANIMATED SCORE I
KAMPNAGEL, HALL K2 | 20.00–21.30

Gero Koenig
Chordeograph. ResonanceGrid I (2016)
Chordeograph solo

My project Chordeograph evolved from research into sound and perception toward the development of an instrument, starting from the objective of realizing a dynamic reference system for the continuous transformation between microtonal grids. In Chordeograph. ResonanceGrid this work is fulfilled in two experimental setups.

In the first experimental setup, complex microtonal grids are transformed by stretching, compression and shifting. Through different movements (directions, angles and speeds) of physical objects on a string plane, I achieve continuous transformations from broadband noise to unison. On the basis of graphical scores I develop precise and repeatable playing procedures. Each movement is reflected in the sound, in variations ranging from minimal to intense.

In the second experimental setup, microtonal grids form a dynamic reference system: four players of the TonArt ensemble explore microtonal degrees of freedom between the nodes of tonal grids. Sound events played on Chinese sheng, bass clarinet, viola and double bass intertwine with the sonic space created by the electro-acoustic instrument Chordeograph. Temporary resonances and harmonically complex constellations form a multidimensional space and time grid moving between stability and the unforeseen. Through my artistic research, after five years of continuous development, I have transcended the grand piano design and its static tonal system, completing the electro-acoustic instrument Chordeograph. The resonant possibilities of the grand piano are explored dynamically.

Arne Eigenfeldt
Unnatural Selection. Mvt 2 Much Beauty is Before You (2014)

A modified artificial life system, where musical ideas are born, are passed on to new generations and evolved, and eventually die out, replaced by new ideas. Initial populations of phrases are created by rules derived from an analysis of exemplar music. It is a generative system, in that each iteration/performance produces new material; as such, the performers are reading music they have never seen before, from iPads. One unique aspect of this work is that the complexity of the music must be balanced with its immediate readability: only musicians of exceptional ability can perform it.

Generative music systems are based on the notion of creating complete works of art, producing both the details (i.e. the notes) and the overall structure of the music. While this may seem like a conceptual curiosity for the audience – who would be unaware of whether a piece of notated music was generative or traditionally composed – live generative music can (hopefully) produce the same excitement that improvisation imbues: a sense that the audience is experiencing something that has never been heard before, and will never be heard again.


Gil Dori
Desert Stroll (2013) for solo clarinet and live electronics

Desert Stroll is a microtonal mobile, in which the performer chooses how to progress through the piece. It honors the Sonoran desert, as well as the indigenous people of this area, the Tohono O’odham. The composed material derives from a study of Tohono O’odham songs, and features their unique microtonality, melodic gestures, and vocal inflections. The electronic element enhances the experience of an ever-changing sonic landscape, much like the desert itself. The piece found its inspiration in the poem The Desert, by Native American poet Jeanette Chico.

Carola Schaal: clarinet

Alexander Sigman
fcremapno (2014)

fcremapno is the second in a series of re-mappings of the three-part animation Future Creatures (2013), realized in collaboration with visual artist Eunjung Hwang. Taking also as its point of departure the third movement of fcremaperc (for zither and live electronics), actions are performed entirely on the strings and frame of the piano with six objects of contrasting materials and dimensions, and the electronics are projected via small transducers into the piano. Unlike its predecessor in the fcremap series, however, the piece is cast in one continuous movement, as opposed to three distinct sections.

Composed for Belgian pianist Frederik Croene, fcremapno was premiered in Ghent in December 2014.

Pedro Louzeiro
Comprovisação no. 5 (2016)

The score for “Comprovisação nº 5” does not yet exist. Or it no longer exists. It exists only in the present, in the moment while it is being interpreted. It is created in real time from the improvisation of a soloist, through algorithms of musical composition. Almost simultaneously, it is sight-read by an ensemble of musicians.

A network of computers is used as an interface between soloist and ensemble, between improvisation and composition. A software system called “Comprovisador” is used to generate the score and to display it for the musicians of the ensemble. Furthermore, through the “Comprovisador”, parameters of algorithms are manipulated in real-time by the performance conductor/composer, largely affecting the musical outcome in terms of form, instrumentation, harmony, rhythm, density, expression, articulation, among other aspects. Hence, the “Comprovisador” enables the conductor to make compositional decisions, acting in real-time upon improvised musical material while the improvisation occurs. The improviser, in turn, reacts to and interacts with the music being played around him or her (which is already a response to his or her improvisation) forming a dialectical relationship.

The described system – “Comprovisador” – is currently being developed by Pedro Louzeiro in his doctoral program at Évora University, Portugal, with the financial support of the Portuguese Foundation for Science and Technology (FCT) by means of a PhD studentship.

THURSDAY, SEPTEMBER 1
REAL-TIME COMPOSITION, IMPROVISATION & ANIMATED SCORE II
KAMPNAGEL, HALL KMH | 21.30–23.00

Sascha Lino Lemke
AKKORDeONoff
for a pianist, harmonica, cylinder, light & a/v electronics

AKKORDeONoff is part of a series of study-like pieces for musician(s) and their doubles. I am interested in working with the memory of the listener: How can I build new virtual realities out of elements that have already been heard and/or seen? What is real and what is not? Who is dependent on whom?

The piece is inspired by many different things: There is a lament on Machaut’s death written by Magister Franziskus (ars subtilior, 14th century); the harmonica can play my favorite moment of that piece... It happens that it could relate to a sonata rich in dissonances and similar parallel fifths by Scarlatti, the Baroque master of Flamenco... which again could meet up with Ravel’s Gaspard de la nuit. Also inspiring to me were the traditional rituals of piano recitals and Leopold Bloom from Joyce’s Ulysses, especially during the scene at the cemetery.

a … center pitch of the piece.

AKKORD … a fantasy about a minor chord, or about a sequence of four chords from a lament on Machaut’s death by Magister Franziskus … Akkordarbeit in German means piece-rate work and is made up of the two words for chord and work.

e … electrified … producing virtual acoustic and visual doubles using electricity … echt = real: What is real and what is virtual? The beginning of deliberate confusion using this labyrinth of a machine, of which the pianist becomes a part.

AKKORDeON … the harmonica as the accordion of the poor … similarities between the sound of partly transformed piano resonances and the sound of the accordion … the natural rocking gesture of playing an accordion.

ONoff … main gesture of the piece … perceiving time as a sequence of slowly, stutteringly played frames of a film.

Cat Hope
Pure (2014)
German Premiere

This piece was written at the Visby International Composers Centre, Sweden in March, 2014, with the assistance of the ISCM and the Australian Music Centre, as part of a Churchill Fellowship.



FRIDAY, SEPTEMBER 2
LUNCH CONCERT III
HAW, PRODUKTIONSLABOR | 12.15–13.00

Johannes Kreidler
Two pieces
for clarinet and video

This piece was sponsored by the state of Berlin.

Andrej Koroliov
irritate me (Herbst)
for clarinet, video and distortion pedal

Dong Zhou
Schizophonic (2016)
for clarinet, electronics, live video

The Canadian scholar R. Murray Schafer voiced a critique of radio by describing it as a source of ‘defamiliarisation’ from the everyday environment. Identifying it as a sensory alienation, Schafer portrayed this media experience as ‘schizophonic’. When Schafer coined radio as schizophonic, he criticized the perceptual split of sounds. This piece is a further exploration of the ‘schizophonic’, taking not only sound but also image and the narrator’s experience out of their original context. Audio and visual materials recorded in Shanghai, Osaka and Hamburg are intermingled with solo bass clarinet to evoke a memory marked by trauma and the distorted landscapes stored there.

Yu-Chung Tseng
Rhapsody for clarinet and tape

Samuel Penderbayne
MY NEW CROSSOVER-CLASSICAL EP!!! (2016) for clarinet & Max4Live

Those who ignore the effects of the internet on the music scene do so at their own peril: we have a new hierarchy. Big agents and record companies are no longer the ‘filters’ of music for consumers, since artists can click their way happily from song idea to international publication. Professional audio production software at affordable prices has brought the exclusive studio to the indie musician; the supremacy of streaming services over mechanical production has brought their tracks to the same sales rack as established artists; online blogs and forums have broken down the traditional promotion lines and allowed hundreds to reach large audiences. This process is simultaneously a release from the previous concentration of power at A&R offices in major CD labels and a destruction of security in the music industry. The famous ‘Big Four’ record companies are a shadow of what they once were. So, the dictator is toppled, but what next?

Jason Freeman
Shadows (2015)

In Shadows, the pianist reads an open-form score from a laptop screen, choosing his own path through a series of connected musical fragments. At the same time, the laptop listens to the pianist, tracks the decisions he makes about what to play, and constantly updates the score in response. This dialogue between pianist and computer, actuated through a dynamic score, serves to amplify the expressive decisions made by the pianist, to subtly push him in new musical directions, and to create large-scale structural arcs in the music.

Shadows consists of four movements, each of which explores the pianist-computer-score interaction from a different perspective:

I. Traces. The score consists of twelve chords followed by their echoes. The speed at which the pianist moves from chord to chord affects how much of the score is displayed and how much is hidden.

II. Chorale. The pianist plays from a selection of five chords and three embellishment notes. Each time a chord or note is played, its harmonic density and complexity are changed.

III. Perpetual Quiet. The pianist builds arpeggios from a constantly changing set of pitches.

IV. Perpetual Melody. The pianist chooses from a combination of rhythmically driven, short melodic motives and chords. Connections between fragments are added and removed based on the amount each fragment is being played.

I wrote Shadows for pianist Melvin Chen, during an artistic research residency at IRCAM in Paris. Many thanks to Arshia Cont and Jean-Louis Giavitto from IRCAM and to Dominique Fober from GRAME for collaborating with me to extend their Antescofo and INScore software, respectively, for use in this piece.

Se-Lien Chuang & Andreas Weiler
Momentum TonArt_SMC (2016)

Fragments of memories (produced both by human beings and by computer) generate a synthesis of sounds and visuals. The sounds of live instruments serve as an interface in an audiovisually interactive concert that merges sophisticated instrumental sound and real-time computing in an amazing improvisation.

While visual images and processes are being generated during the concert, multi-channel granular synthesis, spectral delays and virtuoso chance operations fit the minute tonal particles that make up the instrumental sounds together into a constantly changing acoustic stream of different pitches, durations and positions in the electro-acoustic space. The musical and visual components interact and reciprocally influence each other in order to blend into a unique, synaesthetic, improvisational work of art.


Clemens von Reusner
Definierte Lastbedingung (2016)
for tape

“Definierte Lastbedingung” (English: defined load condition) is based upon the sounds of electromagnetic fields as they arise when electric devices are used. Numerous recordings of electromagnetic fields were made with a special microphone at the Institute for Electrical Machines, Traction and Drives (IMAB) of the Technical University of Braunschweig (Germany). This sound material has little of what intrinsically makes a “musical” sound. There is no depth, no momentum, no space. In their noisiness these sounds are static, though inwardly in motion. Like the well-known electrical hum, they usually seem bulky, harsh and repellent, even hermetic.

“Defined load condition” (a technical term used when testing electrical machines) works with these sounds, which are analyzed in their structure, reshaped and musically dramatized by the means of the electronic studio. The mains frequency of electrical current in Europe is 50 hertz, and hence 50 and its multiples are also the numerical key this composition is based upon in a variety of ways.

Takuto Fukuda
Assimilation (2016)
for tape, double bass and live electronics

Assimilation was composed for double bass and computer at the Kunstuniversität Graz in Austria in 2013. It is an attempt to create organic relationships between three elements characterized by different qualities of motion: exploding attack, monotonous succession, and metallic harmony.

Various modes of correspondence between these three types of motion are explored during the course of the piece. Gradually they develop interdependently toward a climax. At the end of the piece they converge into a single stream through a superimposition of the different types of motion.

Savannah Agger
Undercurrents (2016) for tape

Paul Hauptmeier
sines, crickets and a few (2016) for tape

Julia Mihàly
XXXXXXX (2016)

Parallel to this internet-led democratic rebellion is the rise of Electronic Dance Music (EDM). No other genre so perfectly captures the new production process of mainstream music: the indie EDM producer has an idea, uses digital instruments and samples to bring it to life, mixes and masters it himself or via competitive online services, uploads it to streaming services like Spotify and SoundCloud, and promotes it through social media, online forums and viral online marketing campaigns. Countless artists such as Skrillex, ZHU, Flume, Disclosure, Stromae, Zedd etc. have risen from bedroom wannabes to international superstars through this process. The result has been multifaceted, with some creative talent emerging and hundreds of thousands (if not millions) of formulaic tracks being produced and going straight to the forgotten drives of the internet. The amount of music produced and subsequently discarded in this genre is something new in music history.

In this work, the clarinettist creates and uploads to the internet a new ‘EP’ of Crossover-Classical music called MY NEW CROSSOVER-CLASSICAL EP!!!. This is achieved through ‘comprovisation’, whereby the performer improvises thematic and aesthetic gestures to a structured rubric functioning as a ‘road-map’ score. This mirrors the formulaic production process of EDM; the EP consists of three ‘tracks’ and a remix (generated automatically by the Max4Live patch).

Carola Schaal: clarinet

FRIDAY, SEPTEMBER 2
LUNCH CONCERT IV
HAW, FORUM | 13.30–14.30

Hiromi Ishii
Ryojin-fu (2013) for tape

This multi-channel sound-fantasy was inspired by the legend of a Japanese emperor who was religious and devoted to Imayo (Buddhist chant), but had to fight many battles. The material sounds are: 1. the singing voice (male solo) of Imayo; 2. sounds and noises recorded at a Buddhist ceremony; 3. grain sounds of rice. They have mainly been processed using cross synthesis and granular synthesis. The processed sounds were given different characters of movement and designed in a three-dimensional space: the sounds processed from (1) appear in variations (but never as the original sound) and finally lead to a sound resembling a boy’s voice; the massive sounds relating to (2) move slowly and develop into a sound-wall; the sounds produced from (3) move quickly and irregularly, like flying living objects. The original version of this piece (20 channels) was composed using the Zirkonium 3D sound space system.


Manfred Stahnke
Partch Harp (1987/89) for microtonal harp and DX 7 II

This piece was written in 1987/89 for a harp in scordatura, containing “natural” just major thirds (5/4) and “natural” just minor sevenths (7/4). The synthesiser’s tuning follows the harp tuning and allows these just intervals for any played pitch, up and down. The numbers 5 and 7 indicate the partials of a fundamental tone “1”. Thinking in “whole numbers” looks quite mathematical, but it is very closely linked to how our ear works. Apparently, the ear tends to “simplify” what it hears. If we listen to a so-called “tempered” interval, the ear adjusts these intervals mentally to the simpler forms, and will accept a “detuned” third as a “natural” third with some added noise features.

In my piece Partch Harp, however, the “noise” is artfully incorporated. If the deviation from the simple interval is too big – say a quartertone – then the ear cannot adjust anymore and detects a “wrong” interval. This is especially true for my octaves and fifths, the very simple 2/1 and 3/2 proportions. Imagine three “just thirds” on top of each other: C-E-G#-B#. The summed-up deviation from an octave C-C is almost a quartertone. The same is true for my synthesiser tuning, where every minor second is “short” by 3.5 hundredths of a semitone. If you superimpose 7 of them to get a fifth, this strange “fifth” misses by 7 × 3.5 = 24.5 hundredths, a very audible eighth tone.
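The arithmetic behind these deviations can be checked in a few lines of Python (an illustrative sketch, not part of the piece; the `cents` helper is our own):

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents: 1200 cents = one octave (2/1)."""
    return 1200 * math.log2(ratio)

# Three just major thirds (5/4) stacked: C-E-G#-B# versus the octave C-C.
three_thirds = (5 / 4) ** 3                     # 125/64
octave_gap = cents(2) - cents(three_thirds)     # ~41.1 cents, almost a quartertone

# Seven minor seconds, each 3.5 cents short of the tempered 100 cents:
fifth_gap = 7 * 3.5                             # 24.5 cents, roughly an eighth tone
```

The 41-cent shortfall of the stacked thirds is the classical diesis, which is why the note calls the summed deviation “almost a quartertone” (50 cents).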

The strange – or charming – feature of Partch Harp is that the harp is tuned in perfect octaves and the synthesiser is not. Because of this I get a strangely drifting vessel in an ocean of well-tuned asymmetry.

As for the title: Harry Partch (1901–73) invented a justly tuned 43-tone scale, and to play it, he built his own collection of instruments.

Jean-Claude Risset
Nature Contre Nature (1996)
for percussion and tape

In this work dedicated to Thierry Miroglio, the percussionist plays in dialogue with sounds generated by a computer. The computer gives examples of paradoxical rhythmic processes, which incite the performer to exercise his skill to do the same.

The first exercise induces rhythmic variations by acting upon the content of the sounds rather than upon the time itself: intensity, pitch or timbral differences give rise to melodic fissions or breaks which create ‘illusory’ rhythmic figures – a process illustrated by Ligeti in the sixth of his Études pour piano. The second exercise consists of accelerating or slowing down while keeping a fixed pulse. The third suggests endless accelerations, (a rhythmic snake biting its tail) or accelerandi which end up slower than the initial tempo. In the fourth exercise, the computer pursues accelerations toward chaos, with a tribute to Xenakis’ stochastic forms and Ligeti’s delirious machines. The soloist takes part in this wild chase and concludes the work with his own cadenza.

Auditory illusions seem to run counter to the physical nature of sound; they reveal the nature of auditory perception – nature against nature. Rhythm is not mere chronometry. The unusual rhythmic processes heard here have been explored by the composer and also by the researchers Wessel, Van Noorden, Knowlton, Bregman and Arom. The computer sounds have been synthesised or processed with the MusicV program, taking advantage of the acoustic instrumentarium of Thierry Miroglio.

FRIDAY, SEPTEMBER 2
JOHN CHOWNING & FRIENDS
KAMPNAGEL, K2 | 20.00–21.30

John Chowning
Turenas (1972) for tape

An anagram of “Nature’s”, the title implies, and the music invokes, transformations of natural-like sounds … metal to wood, glass to membrane, winds to noise … in motion through ever-changing avian curves, as if Brancusi’s “Bird in Space” had been magically freed to follow its elegant lines.

This was the first widely presented composition to make exclusive use of frequency modulation synthesis, discovered by Chowning in 1967. It also makes use of a technique for creating the illusion of sounds in motion through a quadraphonic sound space. The original version was computed on a Digital Equipment Corporation (DEC) PDP-10 computer and recorded on a 4-channel Scully recorder. In 1978 Turenas was regenerated on a real-time digital synthesizer designed by Peter Samson (the Samson Box), and in 2009 Bill Schottstaedt (CCRMA) created a software emulation of the Samson Box that allowed Turenas to be recomputed to meet current audio standards. [It is this version that is presented here.]
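The frequency modulation principle named here can be sketched in a few lines of Python (an illustration with arbitrary parameter values, not the actual synthesis data of Turenas): a carrier sine wave whose phase is modulated by a second sine wave, with the modulation index controlling the richness of the sidebands.

```python
import math

def fm(t, fc=440.0, fmod=220.0, index=3.0):
    """One sample of simple FM: sin(2*pi*fc*t + index*sin(2*pi*fmod*t)).

    fc: carrier frequency (Hz), fmod: modulator frequency (Hz),
    index: modulation depth controlling sideband richness."""
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fmod * t))

sr = 44100                                   # sample rate in Hz
tone = [fm(n / sr) for n in range(sr)]       # one second of a static FM tone
```

The ratio of carrier to modulator frequency determines whether the resulting spectrum sounds harmonic or inharmonic (bell- or metal-like), which is what gives FM its range from wood and membrane to glass and metal.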

Present at the premiere of Turenas in Dinkelspiel Auditorium, Stanford University on April 28, 1972, were Martin Bresnick, Andrew Imbrie, Gyorgy Ligeti, Loren Rush, Leland Smith and Ivan Tcherepnin, who wrote the following notes in 1973 for a concert at Harvard University.

This computer generated tape composition makes extensive use of two major developments in computer music pioneered and developed by John Chowning, working at Stanford’s Artificial Intelligence Lab. The first involves the synthesis of moving sound sources in a 360-degree sound space, which takes into account the effects of the Doppler shift. The second was a breakthrough in the synthesis of “natural” (as well as almost “supernatural”) timbres in a simple but elegant way, using accurately controlled frequency modulation. This is the technical background, but the piece is not about that background.

The title “Turenas” is an anagram of “Natures”, evoking the way sounds “tour” through the space, transparent and pure, produced by the most technologically sophisticated means yet tending to sound perfectly natural, as if a dream could come true.

Ivan Tcherepnin (1943–1998)

Leland Smith’s program Score was used to create the input data for the composer’s spatial and synthesis algorithms. In 2009 Bill Schottstaedt (CCRMA) created a program that allowed Turenas to be recomputed to meet current audio standards.



A single soprano engages a computer-simulated space with her voice. The computer allows us to project sounds at distances beyond the walls of the actual space in which we listen – to create an illusory space. Her utterances launch synthesized sounds within this cavern-like space, sounds that conjure up bronze cauldrons, caves, and their animate inhabitants, sounds of the world of the Pythia modulated by our technology and fantasy but rooted in a past even more distant than her own – the Pythia’s voice is the voice of Apollo.

Selected pitches of the soprano’s voice are detected by the computer running a program written by the composer in MaxMSP. The soprano’s voice is transmitted from a small head microphone to the computer, where it is spatialized, mixed with synthesized sounds and then sent to the sound system in the auditorium. As each sung “target” pitch is detected, the program advances in search of the next. The overall pace of the composition, therefore, is determined by the soprano. The pitches are drawn from a scale division based on powers of the Golden Ratio, rather than the traditional division based on powers of two (octaves). The inharmonic partials of the synthesized sound, also derived from the Golden Ratio, are ‘composed’ to function in the domains of pitch and harmony as well as timbre, an idea first conceived by John Pierce and brilliantly realized by Jean-Claude Risset in “Mutations” (1969).
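The scale construction described here, dividing pitch space by powers of the Golden Ratio rather than powers of two, can be sketched in Python (the base frequency and step count are illustrative assumptions, not values from the piece):

```python
import math

PHI = (1 + math.sqrt(5)) / 2          # Golden Ratio, ~1.618

def golden_scale(base_hz, steps):
    """Frequencies spaced by powers of PHI instead of powers of 2 (octaves)."""
    return [base_hz * PHI ** k for k in range(steps)]

scale = golden_scale(220.0, 5)
# Each "pseudo-octave" spans log2(PHI) of a real octave:
width_cents = 1200 * math.log2(PHI)   # ~833 cents instead of 1200
```

Because PHI replaces 2 as the interval of repetition, octave equivalence disappears, and partials built on the same powers are inharmonic, which is why pitch, harmony and timbre can be ‘composed’ from one and the same ratio.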

Voices was commissioned by the Groupe de Recherches Musicales of French radio and first performed in March 2005 in the Salle Messiaen. Version 2 was performed in March 2006 as part of the Berkeley Symphony series, and version 3 was presented at the Tanglewood Music Festival in August 2011.

[The text is pieced together from Aeschylus, Aristophanes, Heraclitus, Herodotus, Lucan, and Plutarch, with interpolations by the composer.]

Ah, Prayer to Gaia,
Stone walls sing her song.
Ah, Parnassus’ shrines,
Ah, Corycian rock where Nymphs abound.
Ah, Phoebus came.
Python fought! Python slain!

Ah, Song to Gaia!

[ipsissima verba (in bold): interjections of fragments of purported and theatrical utterances, the very words of the Pythia. The reported states of mind of the Pythia during an utterance ranged from matter-of-fact, to ecstatic, to frenzied, to glossolalic.]

I know the number of the grains of sand and the extent of the sea, and understand the speech of the dumb and hear the voiceless!

[Asserting her prophetic abilities to Croesus before his campaign against Persia. Herodotus, I 47]

Apollo, he saw from the
yawning cave,
the air was full of voices,
Ah… [Improvisation – a young Pythia discovers the magic of caves]
Voices murmured from the depths.

Ah, Song to Gaia!
Dark blood trickles, in prophecy of the woe to come. But rise, hurry from the shrine, and steep your soul in sorrow!

[To the Athenians facing the Persians before the second, more favorable “Wooden Wall” oracle. Herodotus, VII 140]

Here in this shrine,
having sipped from the spring,
laurel burned,
I wait for the spirit of Apollo.

From near and far
men come to hear
Sounds from my breast,
as when Etna boils!

[Utterances in a state of prescient ecstasy from the intoxicating and disorientating effects of the ethylene vapors (pneuma)]

Constantin Basica
Tell Me What to Do (2013) for clarinetist, conductor, Kinect, live electronics, and live video

This piece is recommended in cases of advertisement overload and can be performed without prescription. For dosage, persisting symptoms, or side effects, please observe the audiovisual information or contact the composer.

Georg Hajdu
Beyond the Horizon (2008) for 2 Bohlen-Pierce clarinets and synthesizer

Beyond the Horizon is the result of my continuous preoccupation with the Bohlen-Pierce scale, which – characterized by its particular acoustic qualities – is an almost perfect fit for the clarinet.

What motivated this piece was the purely hypothetical and philosophical question of what the world would look like if it consisted only of odd numbers, as is the case with the clarinet spectrum. But these are exactly the questions that inspire composers to create parallel worlds contrasting with the omnipresence of 12-tone temperament. Since the Bohlen-Pierce scale is based on the just twelfth, or the tritave, as Pierce calls this octave replacement, it seemed a logical step to use a computer to construct a stretched spectrum whose 2^n partials line up with the 3^n harmonics. This artificial, bell-like sound was slightly modified to reduce what we call sensory dissonance, which would otherwise result from the sounds of the Bohlen-Pierce intervals and chords. We thus achieve coherence between the spectral, harmonic and tonal dimensions, something we also encounter in traditional tonal music.
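Both ideas in this note can be made concrete with a short Python sketch (an illustration under the usual equal-tempered reading of the Bohlen-Pierce scale, 13 steps per tritave; not Hajdu’s actual synthesis patch):

```python
import math

# Equal-tempered Bohlen-Pierce: 13 equal steps per tritave (3:1).
bp_ratios = [3 ** (k / 13) for k in range(14)]    # bp_ratios[13] == 3.0

# Stretching an octave-based spectrum so the 2^n partials land on the
# 3^n harmonics: partial p maps to p ** (log 3 / log 2), e.g. 2 -> 3, 4 -> 9.
stretch = math.log(3) / math.log(2)
stretched = [p ** stretch for p in (1, 2, 4, 8)]  # close to [1, 3, 9, 27]
```

Since the stretched partials coincide with powers of 3, they fall on tritave repetitions of the Bohlen-Pierce scale, which is the coherence between spectrum and tuning the note describes.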

The text, written by the cosmologists Lawrence M. Krauss and Robert J. Scherrer (The End of Cosmology?, Scientific American 298, 46–53 (2008)) and recited by Marcia Lemke-Kern, is an incentive to start thinking about the existence of a parallel (tonal) world that may eventually disappear from our view if we don't catch the moment.

This performance is dedicated to the first discoverer of the scale, Heinz Bohlen, who sadly passed away this February, as well as to its second discoverer, John R. Pierce, who was John Chowning's mentor at Bell Labs and later his colleague at Stanford University.

John Chowning: Voices (2011) for voice and electronics

Voices is a play of imagination evoking the Pythia of Delphi and the mystifying effects of her oracular utterances. For nearly a thousand years, the oracle held a place of prominence in the history and culture of ancient Greece – always a woman whose roots, some scholars believe, are found in the cult of Gaia, the Earth Mother, followed by a succession of goddesses beginning with Themis and Phoebe, and finally supplanted by the god Apollo, whose priestess was the Pythia. Her utterances were believed to be his “voice” in answer to questions posed to the Pythia by supplicants from all over the ancient world – questions that ranged from the mundane to the portentous.


PROGRAM NOTES | CONCERTS

Gerriet K. Sharma: grrawe (2013) for tape

The composition investigates the sculptural presence of 3D sound objects, using primarily the IEM icosahedral loudspeaker. How can we compose and reproduce the “musical counterpart” in space? How can we help the plastic sound object to emerge? The composition raises the question of the self-localisation of individuals in their (sonic) environment or world. It is a continuous play with the perception of movement, distance and perspective. Where is the composer, where is the listener? Who is the composer, when does “world” come into being, and when does it withdraw itself from the composer and/or the listener? Can we look forward to finding an answer?

Richard Hoadley: Calder’s flute for flute and computer

‘Calder’s flute’ is a composition for live instrumentalist and live computer generated sound. The music has two aspects: one part is electronically synthesised using automatic, algorithmic schemes (although the synthesised sounds are very convincingly like a piano, they include microtonal elements and amplitude envelopes not feasible on the physical instrument); the second part uses a selection of these algorithmic schemes to generate common practice music notation. The composition is generated, displayed and performed live. It is of approximately ten minutes’ duration, although this, too, can be set algorithmically. Through rehearsal the performer has a clear idea of the nature of the musical phraseology with which they will be presented, but they will not have seen the detail of the music before.

Constantin Basica: Transhumant (2013) for fixed media

“Transhumant” is an adjective derived from “transhumance”, which denotes the seasonal movement of people and their livestock between summer and winter pastures. The short film circles around the idea of globalization, a phenomenon known to affect national identities and traditions. The protagonist is torn between two worlds, the modern and the traditional, symbolized by the urban and rural landscapes. He is not able to adapt to progress at the same pace as his surroundings, yet he simultaneously feels alienated from his native environment. Furthermore, visual effects were used to illustrate his fear of the unknown and the dichotomy of the modern versus the traditional.

Transhumant was created for performance with ensemble, live electronics, and live-synced projection, but it can also be experienced as fixed media.

The film was shot at the Dimitrie Gusti National Village Museum, Bucharest (RO) and in HafenCity, Hamburg (DE).

The music was recorded with Ensemble Dal Niente.


Pneuma, echo, voices, in dark cavern, spacious vault

Ah, Song to Gaia! When the swallows, fleeing before the hoopoes, shall have all flocked together in one place, and shall refrain them from all amorous commerce, then will be the end of all the ills of life; yea, and Zeus, who doth thunder in the skies, shall set above what was once below.

[To the women of Athens, prophesying the success of the withholding of their charms – Aristophanes, Lysistrata]

But my voice not always willing.

Ah, Song to Gaia! Men seeking oracles, let each pass in, in order of the lot, as use allows; for I prophesy as the god leads … What horror! He’s just, just sitting there, his hands, dripping, dripping blood, and sword drawn!

[Before and after entering the shrine and finding the blood-covered Orestes and the Furies – Aeschylus, The Eumenides]

Ah, I wait for his spirit. Ah, Apollo! Ah, here in my breast, Apollo! I follow his sign. Ah, I follow his sign, my words without smile or charm that reach a thousand years. Ah, Apollo! Ah, words that reach a thousand years, by my song.

Performers: Maureen Chowning, Carola Schaal, Nora-Louise Müller, Lin Chen, Marcia Lemke-Kern, Andrei Koroliov, Gesine Dreyer, Manfred Stahnke and others

FRIDAY, SEPTEMBER 2
CONCERT-PARTY BY JACOB SELLO
KAMPNAGEL, K4 | 22.00–00.00

SATURDAY, SEPTEMBER 3
LUNCH CONCERT V
HAW, PRODUKTIONSLABOR | 13.15–14.15

Artemi-Maria Gioti: Magnetic fields (2016) for electromagnetically augmented piano

The electromagnetically augmented piano is an autonomous, self-observing system consisting of a microphone and various computer-controlled electromagnets and solenoids placed inside the piano soundboard. Through the microphone the system is able to sense its immediate environment and detect any input in addition to that generated by the system itself. When additional (human) input is detected, the system adapts its output correspondingly. The relationship between the performer and the computer system is governed by forces of attraction and repulsion and is based on mutual adaptation to each other’s changing behaviour. All sounds produced in the piece – including those generated by the computer – are strictly acoustic, the electromagnets being used only to actuate the strings.





Wilfried Jentzsch: Particle World for fixed media

Visual Music pieces are often required to give equal importance to images and music. In this composition both media are closely related at the level of the smallest unit of auditory and visual perception: particle synthesis. The relationship between the two media is primarily interactive; sound characteristics react directly to the particles. A single lute sound constitutes the sole sound source. It has been processed by granular transformations, spectral extraction and spectral compression. Spectral compression is a new method of sound transformation which produces a large scale of microtones.

SATURDAY, SEPTEMBER 3
COPECO CONCERT I
KAMPNAGEL, HALL K2 | 18.00–20.00

SATURDAY, SEPTEMBER 3
COPECO CONCERT II
KAMPNAGEL, HALL KMH | 20.30–22.30

Jones Margarucci: Inhabited Places_part II (Behind Two Holes) (2015) for tape

Inhabited Places is a series of three pieces based on the concept of algorithmic composition. Although the general shape of these pieces has been determined in a conventional way, every sound one hears is selected in real time by different algorithms written in SuperCollider. These algorithms randomly choose audio files from different folders and play them at different speeds (time stretching) and at different moments. This pseudo-random process was also applied to the spatial domain: the amount of reverb was determined randomly between a minimum and a maximum value, and the movements of sounds – elevation and pan position – were determined by a noise generator. The sound materials come mainly from recordings and processing of my improvisations with guitar and/or electroacoustic devices and sounding objects. These pieces were composed at the EMS studios in Stockholm.
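The pseudo-random selection process described above might look like the following Python sketch. The piece itself was realized in SuperCollider; the folder contents, rate range and reverb bounds here are hypothetical illustrations:

```python
import random

# Hypothetical pools of source recordings (the actual folders held
# guitar improvisations and sounding-object recordings).
folders = {
    "guitar": ["g1.wav", "g2.wav"],
    "objects": ["o1.wav", "o2.wav", "o3.wav"],
}

def choose_event(min_reverb=0.1, max_reverb=0.9):
    """Pick a file, playback rate, reverb amount and pan at random,
    mirroring the pseudo-random process described in the note."""
    folder = random.choice(list(folders))
    return {
        "file": random.choice(folders[folder]),
        "rate": random.uniform(0.25, 4.0),         # time stretching
        "reverb": random.uniform(min_reverb, max_reverb),
        "pan": random.uniform(-1.0, 1.0),          # noise-driven in the piece
    }

event = choose_event()
```

Bounding the reverb between a minimum and a maximum, as the note describes, keeps the random spatial process within a musically usable range.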

Leng Censong: Jiang Xue (2014) for yangqin, 8-channel audio and video

Qing Bo (plainness), Dan Ran (calm), and Ningjing Zhiyuan (tranquility reaches far): using the pitch processing of spectral music, computer modulation of simple waves according to the harmonic series sampled from the guqin, combined with deformations of the sampled guqin waveform, the piece tries to express an individual, specific imagining of the poem Jiang Xue (River Snow), as well as a new acoustic realm combining the artificial with the natural. Software used: SPEAR, Max/MSP and others.

Ayako Sato: August, blue colored green (2015) for tape

This joint project between the Tokyo University of the Arts and the École Nationale Supérieure des Beaux-Arts de Paris gave a performance, “nature and me”, at the Echigo-Tsumari Art Triennale in August 2015. “August, blue colored green” is an electroacoustic piece that reconstructs “nature and me” from fragments of music written for the project and sounds recorded during it.

Takuto Fukuda: Beyond the eternal chaos (2014) for flute and electronics

Beyond the eternal chaos was composed at the University of Music and Performing Arts Graz, Austria, in 2014. The piece explores possible transformations of musical elements by applying interpolation algorithms. Transformative behaviours occur at several levels of the piece, such as changes in the prominence of the flute versus the electronics from section to section, transitions between noise and tone within a phrase, the morphing of one phrase into another, and so on. The composition is divided into three sections.

The first section renders a convergent process of three musical elements, characterized respectively by waving motions between tone and noise, by sustained tones with portamento, and by a melody outlined by tongue rams and lip slaps. The second section depicts a scene in which the electronics prominently dominate the instrument. The third section traces a transformative development from a melody to an ascending passage. At the end, the piece closes as if dissolving into chaos through echoes. The sections are performed in succession.


PROGRAM NOTES | INSTALLATIONS


ALEXANDER SIGMAN: DETRITUS II, HAW, R4

detritus II (2009) was realized in collaboration with New York-based filmmaker/video artist Colin Elliott. As the final installment in Nominal/Noumenal, two interlocking cycles of works for soloists, chamber ensembles, electronics, and video, the piece was intended as a drawn-out act of “pulling the plug” on the cycles. The audio ingests instrumental samples derived from several Nominal/Noumenal pieces, as well as both fragments of French proto-surrealist writer Lautréamont’s Les Chants de Maldoror that figure prominently in detritus I, the immediately preceding composition for countertenor, ensemble, electronics, and video, and expressive indications found in the score of detritus I. By the same token, segments of the violent and sensual Lautréamont text are manipulated in the visuals, whose raw, grainy, and distorted qualities reflect the anti-digital nature of the electronics. The materials employed in both the visual and auditory domains range from the highly abstract (color fields in the video, noise-bands in the audio) to the highly iconic (landscapes and machine images in the video, urban/industrial environmental sources in the audio).

SEIICHIRO MATSUMURA: SOUNDING KALEIDO (2015), HAW, R5

Sounding Kaleido is an interactive audiovisual installation. Participants interact to create a kaleidoscopic visual design in real time. A web camera captures pictures of parts of their bodies, such as faces or the colors of their clothes, and these materials are used to build the kaleidoscopic picture. Participants can control the size of the picture through the amplitude of their vocal input. The audio output adds a feedback delay effect and a pitch-shifting effect to their voices; the pitch-shifted voices are layered in five parts. These multiple voices are a metaphor for the multiplied faces and body parts in the kaleidoscopic pictures. Sounding Kaleido has two aims:

• It brings the optical phenomena that happen inside a kaleidoscope out of its cylinder.

• It offers the experience of extending one’s behavior directly into both geometrical and auditory aesthetics.

Sounding Kaleido is implemented using Pure Data and GEM, which process both audio and visuals. The installation was exhibited in Japan and Poland in 2015.

Exhibition records:
• Niitsu Art Museum, Niigata, Japan (20. 7.–23. 8. 2015)
• Shibuya Egg art festival, Tokyo, Japan (25. 10.–3. 11. 2015)
• Audio Art 2015, Krakow, Poland (14. 11.–22. 11. 2015)

AUGUST 31–SEPTEMBER 3
INSTALLATIONS
HAW | 9.00–18.00

SPENCER TOPEL: VOX NIHILI, HAW, A1

Both a performance and an installation, Vox Nihili (voice of nothing) comprises an ensemble of five electronically prepared hybrid instruments called “surrogates”. As a concept, it explores the boundaries of perception and physics to create an imaginary landscape where the essence of the human voice is lost, left endlessly searching for its body. This installation version features only the musical materials and instruments of the surrogate portion of the ensemble, further emphasizing the absence of human engagement. Instead, the instruments as machines have assumed the role of performer, creating a provocative and engaging atmosphere. Pauses between small episodes evoke poem-like structures, and melodic parts appear as incomplete fragments, becoming comprehensible at fleeting moments within the context of other fragments, but only as shadows of a music not audible.

NÚRIA BONET: SONIFICATION OF DARK MATTER, HAW, R1

We cannot directly perceive dark matter. Yet dark matter and dark energy are thought to account for 95% of the universe’s mass-energy, with dark matter contributing 26.8% of it. We know that dark matter should exist because of its effect on the universe, even though we cannot directly observe it: dark matter neither absorbs nor emits radiation. The existence of dark matter and dark energy is hypothesised because the gravitational behaviour of the universe far exceeds what would be expected from the accountable ordinary matter alone. That is to say, if only what we can perceive exists, we cannot explain why the universe works the way it seems to. Representing dark matter is therefore a challenge, both visually and aurally. As discussed, it does not absorb or emit radiation, so we cannot see it; nor is there any transmission of sound waves, as there are no molecules through which they could propagate. This work aims to make dark matter cognitively understandable while remaining truthful to its essence. I have worked with visualisations of dark matter simulations produced at the SLAC National Accelerator Laboratory at Stanford University; they have simulated dark matter behaviour such as dark matter streams and clusters of dark matter forming halos. These are incredibly complex simulations, as the team has developed a novel approach to visualisation. Unfortunately, these visualisations were silent. The datasets used to create them are very complex, far more complex than can be understood by simply looking at an image over time. I therefore think that adding an audio element allows the listener to perceive far more information than would ever be possible through the visuals alone. My sonifications stem from the same datasets that were used to create the visualisations; the aural and visual experiences are therefore directly correlated. The music does not merely work as a soundtrack but is an equal partner to the video.

I am particularly interested in transmitting the intense sense of energy in the behaviour of dark matter, but also the idea of spatial location; both are also inherent to musical composition! The installation presents a number of different data and sound mapping choices in succession, before reaching a synthesised version of what I, as a composer, consider to be the most scientifically and musically interesting and valuable combination of sonification decisions.




HENRIK FRISK AND ANDERS ELBERLING: IL SE TOURNA (2013), HAW, E39

This work, commissioned for the performance installation Go to Hell in October 2013, is a reflection on the myth of the famous musician and poet Orpheus, whose attempt to save his beloved Eurydice from the underworld ends with fatal consequences. There are many different transcriptions of the myth; the most commonly cited version is from the time of Virgil, in which Eurydice died from a fatal bite while attempting to escape a satyr. Another Roman poet, Ovid, wrote a version in which Eurydice’s death followed from her dancing with water nymphs on her wedding night. The moral undertone of this version is fascinating and opens up different interpretations. Furthermore, the ending exists in several versions, with different, less tragic outcomes.

In this work we have superimposed the different versions but focused on the moment of Orpheus’ turning around, the fatal mistake that makes him lose Eurydice for good. Giving in to his desire to see the face of his loved one again (or is it perhaps his anxiety that Persephone and Hades have fooled him?) is undoubtedly a grave mistake. The video is a long ascending movement from the underworld, and the original version with three screens made it impossible to see all screens at the same time. The viewer thus finds herself in a position similar to Orpheus’ dilemma.

SIMONE PAPPALARDO & GIOVANNI TROVALUSCI: RIZOMA ALCINA, HAW, E42

The fairy Alcina (Orlando Furioso, Ludovico Ariosto) turns her lovers into plants, animals or springs. The beautiful nature surrounding Alcina hides the secret of her loneliness. In Rhizome/Alcina, plant sculptures emit vowel sounds. The raw acoustic material consists of interviews: Gianni Trovalusci asked questions related to the idea of “light” to local people in the area of L’Aquila, a small Italian town recently destroyed by an earthquake. The acoustic characteristics of these raw materials are altered through a series of digital processes that inflect their timbre. The process parameters are automatically indexed using data from:

1. the resonance frequencies of the sculptures themselves;

2. Internet statistics gathered by searching social network pages for the word “light” in tweets and messages in which one would not expect to find it (war, natural disaster, etc.);

3. the light surrounding the sculptures.

The installation is interactive: the audience may play the sculptures by acting on pumps. The installation can be completed by a live performance of the duo Simone Pappalardo (live electronics) and Gianni Trovalusci (flutes). The flute and the electronics animate the quiet air of this small garden.


MONIKA KOWALCZUK, BENIAMIN GŁUSZEK & TYMOTEUSZ WITCZAK: THE NEUROMUSIC, HAW, FLUR

Our neurofeedback-based installation allows every interested person to experience their own neuromusic – a kind of music tied to the human brainwaves in real time, so that users can perceive a sounding realization of their brain activity. The brainwaves are measured by an electroencephalograph (EEG), then interpreted and transformed into music by our software, designed in Max/MSP and based on biofeedback knowledge.

Each user will create a unique musical phenomenon, because each person has individual brainwaves and an individual approach to the neuromusical experience. There is no obvious border between the meanings of terms such as ‘composer’, ‘performer’ and ‘audience’. Some people want to control musical form and structures intentionally and volitionally, while others can treat neuromusic as a kind of ‘mirror’ that reveals information about their mind state. The form of this musical piece depends every time on the users’ brainwaves.

In our installation, musical structures change mainly with biofeedback factors related to the users’ relaxation or focus of attention (e.g. Theta/Beta or Alpha/Theta ratios), but the raw EEG signal is also used to modulate the amplitude of some sounds at certain moments.
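As a rough illustration of such a band ratio (the installation's actual Max/MSP processing is not described in detail here; the band limits and FFT estimate below are common conventions, not taken from the work):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between lo and hi Hz (FFT estimate)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

def theta_beta_ratio(eeg, fs=256):
    """Theta/Beta ratio; higher values are commonly read as relaxation."""
    theta = band_power(eeg, fs, 4.0, 8.0)    # theta band, 4-8 Hz
    beta = band_power(eeg, fs, 13.0, 30.0)   # beta band, 13-30 Hz
    return theta / beta

# A pure 6 Hz test tone lies in the theta band, so the ratio is large.
t = np.arange(0, 4, 1.0 / 256)
ratio = theta_beta_ratio(np.sin(2 * np.pi * 6 * t))
```

In an installation like this, such a ratio would typically be recomputed over a sliding window so the music can follow the user's changing state.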

The first version of this installation was presented at Paradigm Electronic Arts in Edinburgh (March 2016).

In the current version, the sound samples, the compositional structures and, generally, the aesthetic aspect of the music are provided by Tymoteusz Witczak, a composer and sound engineer specializing in contemporary and film music.





CHRIS MALLOY: OPERATION DEEP POCKETS (2016), HAW, E45

Operation Deep Pockets is a video contemplation of decision-making and detachment. In August of 1964, U.S. President Lyndon Baines Johnson made a series of phone calls to direct airstrikes in Vietnam, and to order trousers. In Operation Deep Pockets, we hear audio derived from those phone calls, while wartime images punctuate the president’s dialogue with Secretary of Defense Robert McNamara.

GUIDO KRAMANN: KANONSPIEL (2016), HAW, E46

A PC room is transformed into an audiovisual installation. The intention of this work is to bring ordinary technical artifacts (here, a PC room) to people’s minds in an unusual, musical way, encouraging a less functional and more humane view even of machines. Each PC composes new phrases in real time and puts them into a shared database. This is done by a compositional algorithm that fulfils the following rules: each phrase has to be an imitational canon in itself, and it also has to form a counterpoint with the last two finished phrases in the database, even when those are shifted relative to the new phrase.

At the same time each PC plays one phrase. After finishing its current phrase, a PC chooses one of the last three finished phrases in the database and keeps playing without a break. The musical performance thus has a repetitive character, but a dramaturgy is also incorporated through the newly added phrases. This technique was inspired by Terry Riley’s composition “In C”. The entire screen of each PC is coloured according to the pitch of the last played tone. With a whole PC room of about 20 machines or more, and the program started not exactly synchronously, the evolving performance is reminiscent of pieces for double and multiple choruses in a huge cathedral. Sound samples of xylophone, staccato violin and staccato accordion were used.

If the local situation permits, the software can be used on some of the PCs, or alternatively on additional tablet PCs, in an interactive mode in which visitors type in their own phrases and add them to the database. The software only accepts phrases that follow the rules mentioned above and gives feedback to the user if a phrase is invalid.
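The phrase-pool mechanism described above can be sketched as a toy model. The canon and counterpoint validation is omitted, and a phrase is reduced to a list of MIDI pitches; both are assumptions for illustration, not the installation's actual code:

```python
import random

# Shared database of finished phrases; in the installation each PC
# writes algorithmically validated canon phrases into it.
shared_db = [
    [60, 62, 64, 62],   # seed phrases (hypothetical MIDI pitches)
    [67, 65, 64, 62],
    [60, 64, 67, 72],
]

def next_phrase():
    """After finishing a phrase, a PC picks one of the last three."""
    return random.choice(shared_db[-3:])

def add_phrase(phrase):
    """A newly composed phrase joins the pool (validation omitted)."""
    shared_db.append(phrase)

playing = next_phrase()
```

Because every PC always draws from the three most recent phrases, the ensemble drifts gradually as new phrases arrive, which is what gives the performance its "In C"-like repetitive dramaturgy.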

DANIEL BISIG, JAN SCHACHER, TOBIAS GERBER, PHILIPPE KOCHER, MARCUS MAEDER & MARTIN NEUKOM: AGGREGAT (2015), HAW, E48

The installation ‘Aggregat’ serves as a platform for compositions that involve music and kinetic movement. It is meant to promote compositional approaches that blend formalised technical processes with poetic potentials developed through, and expressed by, the spatiality, corporeality and activity of the kinetic speakers and their mutual permeation with the audience members. The installation consists of an assembly – or “Aggregat” in German – of sixteen speakers. The speakers are placed at head height on stands and arranged in a grid. The speaker-figures have moveable tops and are at once kinetic sculptures and sound diffusion devices.

The current collection of works contains compositions by a group of artists who are all active within the context of the ICST, the Institute for Computer Music and Sound Technology of the Zurich University of the Arts. Each artist’s work is intimately linked with their research activities and is situated at the intersection between the empirical, objective approaches of research and the experiential, singular processes of artistic work. An important specificity of this collection lies in the overlap of disciplines, methods, points of view and artistic domains. A common concern in the artistic practices of all participants are topics of systemic development, behavioural modelling, and algorithm-driven generative composition.

The collection consists of the following pieces:
• As If – Notions of There (Tobias Gerber)
• Magnets (Daniel Bisig, Jan Schacher)
• Neurons (Martin Neukom, Marcus Maeder)
• Nothing Exists … (Jan Schacher)
• Speak Up (Philippe Kocher)
• The Left-hand Path (Marcus Maeder, Jan Schacher)


PROGRAM NOTES | LISTENING ROOM

Wuan-chin Li (Sandra Tavali), Yi-an Huang & Cheng-yen Yang: River

“River” is a computer music work calling for peace in both the environment and human societies. The creative concept was originally inspired by the work of photographer Yun Lin: Between the river and earth, the courage for being alive.

The sound of the oboe represents a river, which reflects the reality of human history. The computer-generated sound is the irrigation from the river. It resonates from the earth to the whole universe – to bless the world’s return to its original serene state. (http://goo.gl/WCiiK6)

“River” was selected for ISMIR 2015 in Spain (the conference of the International Society for Music Information Retrieval) and featured at NYCEMF 2016 (the New York City Electroacoustic Music Festival, also part of the New York Philharmonic Biennial, USA).

Jonathan Wilson: TBA

AUGUST 31–SEPTEMBER 3
LISTENING ROOM
HAW, HR6 | 9.00–18.00

Simone Pappalardo: Col corpo fare ritorno

A composition for flutes and self-designed instruments built from objects left by the sea’s undertow. Dedicated to all the migrants of Sicily.

Anna Terzaroli: Dark Path # 6

Gil Dori: Proportarum

Hans-Gunter Lock: Echo of Cassandra

Luong Hue Trinh: Illusions

Massimo Avantaggio: Penumbra

Sever Tipei: Big Guizmo

Sever Tipei: Quilt

Ji Won Yoon & Woon Seung Yeo: Memorabilia 2016



SMC2016.net