
Horacio Vaggione
Université de Paris VIII
F-93520 Saint-Denis, France

    Articulating Microtime

    Computers and Music as a Complex System

    "Computers are not primarily used for solving well-structured problems ... but instead are compo- nents in complex systems" (Winograd 1979). Music composition can be envisioned as one of these com- plex systems, in which the processing power of computers is dealing with a variety of concrete ac- tions involving multiple time scales and levels of representation.

The intersection of music and computers has created a huge collection of possibilities for research and production. This field represents perhaps one of the areas of highest cultural vitality of our time. It would be somewhat presumptuous to attempt to sum up such richness in a few lines. Hence I will dedicate this article to surveying some of the musically significant consequences of the introduction of digital tools in the field of sound processing, which allow musicians, for the first time, to articulate (that is, to compose) at the level of microtime, and thus to elaborate a sonic syntax.

Surface Versus Internal Processing

To clarify these notions, consider using a MIDI note processor (a typical macrotime protocol) and increasing the density of notes per second to the maximum that it can handle. In this way, we can obtain very rich granular surface textures, and even provoke morphological changes in the spectral domain as side effects of these surface movements. However, we cannot, by this procedure alone, directly reach the level of microtime, by which I mean we cannot explicitly analyze or control the time-varying distribution of the spectral energy.
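As a purely illustrative sketch (not drawn from the article), the following Python fragment shows what such a macrotime procedure amounts to: the only handles it offers are onset density, register, and velocity, so any spectral consequence remains a side effect of whatever synthesizer receives the events.

```python
import random

def dense_note_stream(duration_s=2.0, notes_per_second=200, pitch_range=(36, 96)):
    """Generate a macrotime event list of (onset_s, MIDI pitch, velocity).
    Density and register are surface parameters; nothing here reaches
    inside the waveform that the receiving synthesizer will produce."""
    n_events = int(duration_s * notes_per_second)
    events = []
    for _ in range(n_events):
        onset = random.uniform(0.0, duration_s)   # scattered onsets
        pitch = random.randint(*pitch_range)      # surface register
        velocity = random.randint(40, 110)        # surface dynamics
        events.append((onset, pitch, velocity))
    return sorted(events)

stream = dense_note_stream()
print(f"{len(stream)} note events over 2 s -> a granular surface texture")
```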

The difference between surface and internal processing is well understood today. We can recall, among the disciplines studying the macroscopic domain, the recent development of a macrophysics of granular matter (Guyon and Troadec 1994), which aims to define its territory by taking its distance from both microphysics and chemical analysis-synthesis. But Antoine Lavoisier had already clearly traced the edge between these domains (Lavoisier 1789):

Granulating and powdering are, strictly speaking, nothing other than preliminary mechanical operations, the object of which is to divide, to separate the molecules of a body and to reduce them to very fine particles. But however far one may push these operations, they cannot reach the level of the internal structure of the body: they cannot even break the aggregate itself; thus every molecule, after granulation, still resembles the original body. This contrasts with the true chemical operations, such as, for example, dissolution, which intimately changes the structure of the body.

Naturally, once this distinction is clearly stated, there is room to define all kinds of intermediary (fractional) levels where the different domains can interact. To refer again to our example concerning MIDI macro-processing, the fact that we can bring about changes in the spectral domain as side effects of surface movements can be useful if we also have the necessary tools to analyze and resynthesize the morphologies thus obtained. What is interesting for music composition is the possibility of elaborating syntaxes that take into account the different time levels, without trying to make them uniform. In fact, the sense of any compositional action conscientiously articulating relations between different time levels depends essentially on the general paradigm adopted by the composer. Evidently, he or she must make a coherent decision concerning the status and the nature of the levels involved. This means either placing them in a continuum organized as a linear hierarchy, or assuming the existence of discontinuities (or simply non-linearities) and then considering microtime, macrotime, and all intermediary dimensions as relative, even if well-defined, domains.

Computer Music Journal, 20:2, pp. 33-38, Summer 1996. © 1996 Massachusetts Institute of Technology.


In this article I will first recall some of the steps leading to control over the microtime domain as a compositional dimension, citing some examples of multi-scale approaches deriving from this perspective.

    The Edge

When computers were first introduced, the musical field was concerned only with composition at the level of macrotime: composing with sounds, with no attempt to compose the sounds themselves. This holds true even in the case of early musique concrète, which basically consisted of selecting recorded sounds and combining them by mixing and splicing. Operations in the spectral domain were reduced to imprecise analog filtering and to transposition of tonal pitch by means of the variable-speed recorder, which never allows the separation of the time and spectral domains and only attains spectral redistributions in a casual way.

    "Electronic music," as developed in the West German radio studio in Cologne (Eimert and Stock- hausen 1955), did have the ambition of composing the sound material after the assumptions of para- metric serialism, theoretically appropriate to be transferred to the level of the "internal structure of the body," as Antoine Lavoisier would say. However, the technique at hand, being approximate as it was purely analog, was in fact contradicting these as- sumptions.

Analog modular synthesizers improved the user interface, but were especially inconvenient due to their lack of memory. The control operations possible with them were not supported by any composition theory; articulation (which is mainly a matter of local and detailed definition of unary shapes) was not possible beyond some simple (and yet difficult to quantify) inter-modulations.

It was only the development of digital synthesis, as pioneered by Max Mathews (1963, 1969), that finally allowed composers to reach the level of microtime, that is, to have access to the internal structure of sound. One of the first approaches to dynamic spectral modeling to emerge from the Mathews digital synthesis system was developed by Jean-Claude Risset. In this work, trumpet tones were analyzed/synthesized by means of additive clustering of partials whose temporal behavior was represented by piecewise-linear segments, i.e., articulated amplitude envelopes. Given the complexity of the temporal features embedded in natural sounds, reproduction of all these features was an impossible task. Risset therefore applied a data-reduction procedure rooted in perceptual judgment, what he called analysis by synthesis, reversing the normal order of these terms (Risset 1966, 1969, 1991; Risset and Wessel 1982). Beyond its success in imitating existing sounds, the historical importance of the Risset model resides in the formulation of an articulation technique at the microtime level, giving birth to a new approach for dealing with the syntax of sound.
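The following Python sketch illustrates the general principle of additive synthesis with piecewise-linear (breakpoint) amplitude envelopes; the partial frequencies and breakpoint values are invented for illustration and are not Risset's analysis data.

```python
import numpy as np

SR = 44100

def piecewise_env(breakpoints, n):
    """Piecewise-linear amplitude envelope from (time_s, amplitude) breakpoints."""
    times, amps = zip(*breakpoints)
    t = np.arange(n) / SR
    return np.interp(t, times, amps)

def additive_tone(partials, dur=0.5):
    """Sum sinusoidal partials, each with its own frequency and articulated envelope."""
    n = int(dur * SR)
    t = np.arange(n) / SR
    out = np.zeros(n)
    for freq, breakpoints in partials:
        out += piecewise_env(breakpoints, n) * np.sin(2 * np.pi * freq * t)
    return out / len(partials)

# Hypothetical data-reduced description of a brass-like tone: higher
# partials enter later and decay sooner.
tone = additive_tone([
    (440.0,  [(0.0, 0.0), (0.02, 1.0), (0.4, 0.8), (0.5, 0.0)]),
    (880.0,  [(0.0, 0.0), (0.04, 0.7), (0.35, 0.4), (0.5, 0.0)]),
    (1320.0, [(0.0, 0.0), (0.06, 0.5), (0.3, 0.2), (0.5, 0.0)]),
])
```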

The panoply of digital synthesis and processing methods that we have at our disposal today is rooted in the foundations provided by Max Mathews and the first sonic-syntactical experiences of Jean-Claude Risset. Global synthesis techniques such as frequency modulation (Chowning 1973) and waveshaping (Arfib 1978; Le Brun 1979) share these roots. Long considered only as formulas for driving synthesis processes (in a non-analytical manner), they have recently been reconsidered as non-linear methods of sound transformation, strongly linked to spectral analysis (Vaggione 1985; Beauchamp and Horner 1992; Kronland-Martinet and Guillemain 1993).
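A minimal illustration of the "global" character of such techniques is the Chowning-style carrier/modulator pair sketched below: a single parameter, the modulation index, governs the richness of the entire spectrum at once, with no handle on individual partials. The envelope shape and constants are arbitrary.

```python
import numpy as np

SR = 44100

def fm_tone(fc=440.0, fm=440.0, index=3.0, dur=1.0):
    """Simple FM pair: carrier fc, modulator fm, time-varying index.
    The decaying index closes the spectrum globally over the note."""
    t = np.arange(int(dur * SR)) / SR
    index_env = index * np.exp(-3.0 * t)   # arbitrary decaying index envelope
    return np.sin(2 * np.pi * fc * t + index_env * np.sin(2 * np.pi * fm * t))

y = fm_tone()
```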

On the other hand, the morphological approach derived from the qualitative (non-parametric) assumptions of Pierre Schaeffer (1959, 1966) has also crossed the time barrier, having been given access to microtime control since the development of Mathews's digital system. In the mid-1970s, the Groupe de Recherches Musicales in Paris developed a digital studio that had as its goal the transfer into algorithmic form of the strategies previously developed with analog means (Maillard 1976). Specifically, the goal was to process natural sounds, carrying this processing to "the internal structure of the bodies" in a way never envisaged with the former analog techniques. That trend continued with the SYTER real-time processor (Allouis 1984) and the recent DSP-based tools (Teruggi 1995). We can also recall here the work of Denis Smalley on what he has called spectromorphology (Smalley 1986).

I have myself employed parametric (elementary) and morphological (figurative) strategies combined within the same compositional process to link features belonging to different time domains. An early example of this is described in Vaggione (1984), and some of the conditions allowing one to think of the numerical sound object as a transparent category for sonic design are stated in Vaggione (1991). Another approach rooted in the idea of the sound object as a main category for representing musical signals is being developed around the Kyma music language (Scaletti 1989; Scaletti and Hebel 1991). This is an important area of experience where the idea of the sound object meets some of the assumptions underlying the object-oriented programming paradigm (Pope 1991, 1994).

The MAX block-diagram graphic language developed at IRCAM (Puckette 1988) was strongly inspired by Mathews's family of programs. It has been used to define complex interactions using MIDI note processing (a typical macrotime protocol, as we noted), and has finally crossed the edge of microtime with the addition of signal-processing objects (Puckette 1991; Settel and Lippe 1994). This allows one to create control structures that include significant bridges between different time scales.

    New Representations of Sound

Accessing the microtime domain has confronted composers with the necessity of using a variety of sound representations. A survey of this subject must include the important work of Dennis Gabor (1946, 1947), who was perhaps the first to propose a method of sound analysis derived from quantum physics. Gabor followed Norbert Wiener's propositions of 1925 (see Wiener 1964) about the necessity of assuming the existence, in the field of sound, of an uncertainty problem concerning the correlation between time and pitch (similar to the one stated by Heisenberg regarding the correlation between the velocity and position of a given particle). From this, Gabor proposed to merge the two classic representations (the time-varying waveform and the static frequency-based Fourier transform) into a single one, by means of concatenated short windows or "grains." These grains do not have the same status as the MIDI note-grains discussed earlier, since they constitute an analytical expanse into the microtime domain.

Meanwhile, the engineering community had been improving techniques for traditional Fourier analysis, attenuating its static nature by taking many snapshots of a signal during its evolution. This technique became known as the "short-time Fourier transform" (see, e.g., Moore 1979). However, the Gabor transform still remains conceptually innovative, because it presents a two-dimensional space of description (Arfib 1991). This original paradigm, theoretically explored by Iannis Xenakis (Xenakis 1971), has been taken as the starting point for developing granular synthesis (Roads 1978) and, later, the wavelet transform (Kronland-Martinet 1988).
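The sketch below illustrates the short-time Fourier transform exactly in the sense described here: a sequence of windowed snapshots. The window and hop sizes are arbitrary choices; the single, fixed window length corresponds to the single time/frequency trade-off of a Gabor-style grid of grains.

```python
import numpy as np

def stft_frames(signal, win_size=1024, hop=256):
    """Short-time Fourier transform as a sequence of windowed snapshots.
    One fixed window length sets one time/frequency trade-off for the
    whole analysis."""
    window = np.hanning(win_size)
    frames = []
    for start in range(0, len(signal) - win_size, hop):
        frame = signal[start:start + win_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)   # shape: (n_frames, win_size // 2 + 1)

sr = 44100
t = np.arange(sr) / sr
chirp = np.sin(2 * np.pi * (220 + 400 * t) * t)   # test signal with moving pitch
magnitudes = stft_frames(chirp)
```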

While the first granular-synthesis technique used a stochastic approach (Roads 1988; Truax 1988), and hence did not touch the problem of local frequency-time analysis and control (though this aspect was considered later; see Roads 1991), the wavelet transform had a straightforward analytical orientation. The main difference between the wavelet transform and the original Gabor transform is that in the latter, the actual changes are analyzed with a grain of unvarying size, whereas in the wavelet transform, the grain (the analyzing wavelet) can follow these changes (this is why it is said to be a time-scale transform).
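By contrast with the fixed Gabor window above, the following sketch (a simplified Morlet-style continuous wavelet transform, with arbitrary constants) shows the grain itself being stretched with scale, so that the analysis window follows the analyzed frequency rather than remaining fixed.

```python
import numpy as np

def morlet(scale, sr=44100, f0=880.0, cycles=8):
    """Analyzing wavelet: one shape stretched by `scale`, so the window
    length follows the analyzed frequency f0 / scale."""
    dur = cycles * scale / f0
    t = np.arange(-dur, dur, 1.0 / sr)
    freq = f0 / scale
    envelope = np.exp(-0.5 * (t * freq / cycles * 4) ** 2)
    return envelope * np.exp(2j * np.pi * freq * t)

def cwt_row(signal, scale, sr=44100):
    """One row of the transform: correlate the signal with the scaled
    wavelet. Small scales give short windows (fine time resolution at
    high frequency); large scales behave inversely."""
    w = morlet(scale, sr)
    return np.abs(np.convolve(signal, np.conj(w[::-1]), mode="same"))

sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)
rows = [cwt_row(sig, s, sr) for s in (1.0, 2.0, 4.0, 8.0)]
```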

The wavelet analytic approach, while still at the beginning of its application to sound processing, is also interesting because it is being applied in other fields, for example in modeling physical problems such as fully developed turbulence and in analyzing multi-fractal formalisms (Arneodo 1995; Mallat 1995). Thus it contributes to extending the study of non-linear systems, where the problem of scaling is crucial. The somewhat artificial attempts made to date to relate chaos theory to algorithmic music production can find here a significant bridge between different levels of description of time-varying sonic structures.

It should be stressed that all these new developments are in fact enriching the traditional Fourier paradigm, rather than replacing it. In other words, they do not free us of the uncertainty problem concerning the correlation between time and pitch, but rather give a larger framework in which to deal with it.

Another recent technique used to explicitly confront the basic acoustic dualism was developed by Xavier Serra and Julius Smith (1990). They proposed a "spectral modeling synthesis" approach, based on a combination of a deterministic and a stochastic decomposition. The deterministic part included the representation of Fourier-like components (harmonic and inharmonic) in terms of separate sinusoidal components evolving in time, and the stochastic part provided what was not captured within the Fourier paradigm, namely, the noise elements, often present in the attack portion, but also throughout the production of a sound (think of the noise produced by a bow, or by the breath, etc.).
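The fragment below sketches the idea of such a decomposition on a single analysis frame. It is a schematic illustration under simplifying assumptions, not Serra and Smith's actual peak-tracking algorithm: the strongest spectral peaks stand in for the deterministic part, and the remaining magnitude envelope is re-imposed on noise as the stochastic part.

```python
import numpy as np

def sms_frame(frame, n_peaks=10):
    """Deterministic-plus-stochastic split of one frame (schematic).
    Strongest bins -> 'deterministic' sinusoidal part; leftover
    magnitude envelope applied to random phases -> 'stochastic' part."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    mag = np.abs(spectrum)
    peaks = np.argsort(mag)[-n_peaks:]            # bins kept as deterministic
    deterministic = np.zeros_like(spectrum)
    deterministic[peaks] = spectrum[peaks]
    residual_mag = mag.copy()
    residual_mag[peaks] = 0.0                     # stochastic spectral envelope
    noise_phase = np.exp(2j * np.pi * np.random.rand(len(mag)))
    stochastic = residual_mag * noise_phase       # noise shaped by that envelope
    return np.fft.irfft(deterministic), np.fft.irfft(stochastic)

frame = np.random.randn(1024)    # stand-in for one windowed slice of a sound
deterministic_part, stochastic_part = sms_frame(frame)
```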

The mention of these latter elements leads us to recall the existence of another, different approach to sound analysis and synthesis, which cannot be characterized in terms of spectral modeling but must instead be identified as physical modeling. Pioneered by the work of Lejaren Hiller and Pierre Ruiz (1971) and later expanded by Claude Cadoz and his colleagues (1984), this approach today has a considerable following, with many systems attempting its development (Smith 1992; Morrison and Adrien 1993; Cadoz et al. 1994).
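As a minimal illustration of this source-modeling attitude, the sketch below uses a Karplus-Strong plucked string, a simple relative of the digital waveguide methods cited above (it is an illustrative stand-in, not one of the systems listed): the spectrum is not specified anywhere; it is simply whatever the simulated mechanism produces.

```python
import numpy as np

def plucked_string(freq=220.0, dur=1.0, sr=44100, damping=0.996):
    """Karplus-Strong-style plucked string: a delay line stands for the
    string, with loss and lowpass filtering at the 'bridge' reflection.
    The resulting spectrum is a consequence of the mechanism, not a
    parameter of the model."""
    delay = int(sr / freq)
    line = np.random.uniform(-1, 1, delay)   # initial excitation ("pluck")
    out = np.zeros(int(dur * sr))
    for n in range(len(out)):
        out[n] = line[n % delay]
        # average of two adjacent samples = lowpass + loss at the reflection
        line[n % delay] = damping * 0.5 * (line[n % delay] + line[(n + 1) % delay])
    return out

string_tone = plucked_string()
```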

I regard physical modeling as a field in itself, which seeks to model the source of a sound and not its acoustic structure. However, I think it gives a complementary and significant picture of sound as an articulated phenomenon. Physical modeling can be effective in creating very interesting sounds by extending and transforming the causal attributes of the original models. In turn, it lacks acoustical and perceptual analytic power on the side of the sonic results. Spectral modeling brings us the tools for such analysis, even if we have to pay for this facility by facing certain difficulties in dealing with typical time-domain problems. In spite of these difficulties, spectral modeling has the advantage of its strong link with a long practice, that of harmonic analysis, and hence the power to give an effective framework in which to connect surface harmony (the tonal pitch domain) with timbre (the spectral frequency/time domain).

Multi-Scale Approaches: Beyond Microtime

In any case, it is quite possible that, in years to come, the two main paradigms, spectral and physical modeling, will be increasingly developed into one comprehensive field of sound analysis, synthesis, and transformation. To reach this goal, it is perhaps pertinent to introduce simultaneously a third analytical field based on a hierarchic syntactic approach (Strawn 1980; Vaggione 1994). This approach can serve as a framework for articulating the different dimensions manipulated by the concurrent models, as well as for dealing with the many non-linearities that arise between microtime and macrotime structuring. Object-oriented software technology can be utilized here to encapsulate features belonging to different time levels, making them circulate in a unique, multi-layered compositional network (Vaggione 1991).
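A hypothetical sketch of this kind of encapsulation is given below (it does not reproduce any particular system mentioned here): each object owns its own time scale and representation, and a macrotime object is composed of microtime objects without flattening them into a single linear hierarchy.

```python
import numpy as np

class SoundObject:
    """Common interface shared by objects at every time level."""
    def render(self, sr):
        raise NotImplementedError

class Grain(SoundObject):
    """Microtime object: a windowed sinusoid a few milliseconds long."""
    def __init__(self, freq, dur_ms):
        self.freq, self.dur_ms = freq, dur_ms
    def render(self, sr):
        t = np.arange(int(sr * self.dur_ms / 1000)) / sr
        return np.hanning(len(t)) * np.sin(2 * np.pi * self.freq * t)

class Figure(SoundObject):
    """Macrotime object: grains placed at onsets measured in seconds."""
    def __init__(self, placed_grains):
        self.placed_grains = placed_grains   # list of (onset_s, Grain)
    def render(self, sr):
        end = max(onset + g.dur_ms / 1000 for onset, g in self.placed_grains)
        out = np.zeros(int(sr * end) + 1)
        for onset, grain in self.placed_grains:
            sig = grain.render(sr)
            start = int(onset * sr)
            out[start:start + len(sig)] += sig
        return out

figure = Figure([(i * 0.05, Grain(400 + 40 * i, 20)) for i in range(40)])
audio = figure.render(44100)
```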

Moreover, several complementary approaches dealing with intermediary scales relating microtime and macrotime features are in progress, such as Larry Polansky's morphological mutation functions (Polansky 1991, 1992) and Curtis Roads's pulsar synthesis (Roads 1995). One can cite as well, among others, some recent integrated systems, such as Common Music/Stella (Taube 1991, 1993) or the ISPW software (Lippe and Puckette 1991). These systems support different multi-scale approaches to composition, allowing a parallel articulation of different (and not always linearly related) time levels, defining specific types of interaction, and amplifying the space of the composable.

Having reached microtime, we can now project our findings onto the whole compositional process, covering all possible time levels that can be interactively defined and articulated. This situation, as Otto Laske says (Laske 1991a; but see also Laske 1991b), "paves the way for musical software that not only supports creative work on the microtime level, but also allows for acquiring empirical knowledge about a composer's work at that level, with an ensuing benefit for defining intelligent sound tools, and for a more sophisticated theory of sonic design."

    References

Allouis, J. F. 1984. "Logiciels pour le système temps-réel SYTER." In Proceedings of the 1984 International Computer Music Conference. San Francisco: International Computer Music Association.

Arfib, D. 1978. "Digital Synthesis of Complex Spectra by Means of Non-Linear Distorted Sine Waves." In Proceedings of the 1978 International Computer Music Conference. San Francisco: International Computer Music Association.

Arfib, D. 1991. "Analysis, Transformation, and Resynthesis of Musical Sounds with the Help of a Time-Frequency Representation." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Arneodo, A. 1995. Ondelettes, multifractales et turbulences. Paris: Diderot.

Beauchamp, J., and A. Horner. 1992. "Extended Nonlinear Waveshaping Analysis/Synthesis Technique." In Proceedings of the 1992 International Computer Music Conference. San Francisco: International Computer Music Association.

Cadoz, C., et al. 1984. "Responsive Input Devices and Sound Synthesis by Simulation of Instrumental Mechanisms." Computer Music Journal 8(3). Reprinted in C. Roads, ed. 1989. The Music Machine. Cambridge, Massachusetts: MIT Press.

Cadoz, C., et al. 1994. "Physical Models for Music and Animated Image." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

Chowning, J. 1973. "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation." Computer Music Journal 1(2):46-54.

Eimert, H., and K. Stockhausen, eds. 1955. Elektronische Musik. Die Reihe 1. Vienna: Universal Edition.

Gabor, D. 1946. "Theory of Communication." Journal of the Institution of Electrical Engineers 93:429-457.

Gabor, D. 1947. "Acoustical Quanta and the Theory of Hearing." Nature 159:303.

    Guyon, E., and J. Troadec. 1994. Du sac de billes au tas de sable. Paris: Editions O. Jacob.

Hiller, L., and P. Ruiz. 1971. "Synthesizing Sounds by Solving the Wave Equation for Vibrating Objects." Journal of the Audio Engineering Society 19:463-470.

Kronland-Martinet, R. 1988. "The Wavelet Transform for Analysis, Synthesis, and Processing of Speech and Musical Sounds." Computer Music Journal 12(4).

Kronland-Martinet, R., and Ph. Guillemain. 1993. "Towards Non-Linear Resynthesis of Instrumental Sounds." In Proceedings of the 1993 International Computer Music Conference. San Francisco: International Computer Music Association.

    Laske, O. 1991a. "Composition Theory: Introduction to the Issue." Interface 20(3/4):125-136.

Laske, O. 1991b. "Toward an Epistemology of Composition." Interface 20(3/4):235-269.

Lavoisier, A.-L. 1789. Traité élémentaire de chimie. Quoted in Guyon and Troadec 1994, p. 18.

Le Brun, M. 1979. "Digital Waveshaping Synthesis." Journal of the Audio Engineering Society 27(4):250-266.

Lippe, C., and M. Puckette. 1991. "Musical Performance Using the IRCAM Workstation." In Proceedings of the 1991 International Computer Music Conference. San Francisco: International Computer Music Association.

Maillard, B. 1976. Les Extensions de Music V. Cahiers Recherche-Musique 3. Paris: INA-GRM.

    Mallat, S. 1995. Traitement du signal: des ondes planes aux ondelettes. Paris: Diderot.

Mathews, M. 1963. "The Digital Computer as a Musical Instrument." Science 142.

Mathews, M. 1969. The Technology of Computer Music. Cambridge, Massachusetts: MIT Press.

Moore, F. R. 1979. "An Introduction to the Mathematics of Digital Signal Processing." Computer Music Journal 2(2):38-60.

Morrison, J., and J.-M. Adrien. 1993. "MOSAIC: A Framework for Modal Synthesis." Computer Music Journal 17(1):45-56.

Polansky, L. 1992. "More on Morphological Mutation Functions: Recent Techniques and Developments." In Proceedings of the 1992 International Computer Music Conference. San Francisco: International Computer Music Association.

Polansky, L., and M. McKinney. 1991. "Morphological Mutation Functions." In Proceedings of the 1991 International Computer Music Conference. San Francisco: International Computer Music Association.

Pope, S., ed. 1991. The Well-Tempered Object: Musical Applications of Object-Oriented Software Technology. Cambridge, Massachusetts: MIT Press.

Pope, S. 1994. "The Musical Object Development Environment: MODE (Ten Years of Music Software in Smalltalk)." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

    Puckette, M. 1988. "The Patcher." In Proceedings of the 1988 International Computer Music Conference. San Francisco: International Computer Music Association.

Puckette, M. 1991. "Combining Event and Signal Processing in the MAX Graphical Programming Environment." Computer Music Journal 15(3):68-77.

    Risset, J. C. 1966. Computer Study of Trumpet Tones. Murray Hill, New Jersey: Bell Telephone Laboratories.

Risset, J. C. 1969. An Introductory Catalog of Computer-Synthesized Sounds. Murray Hill, New Jersey: Bell Telephone Laboratories.

Risset, J. C. 1991. "Timbre Analysis by Synthesis: Representations, Imitations, and Variants for Musical Composition." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Risset, J. C., and D. Wessel. 1982. "Exploration of Timbre by Analysis and Synthesis." In D. Deutsch, ed. The Psychology of Music. New York: Academic Press.

Roads, C. 1978. "Automated Granular Synthesis of Sound." Computer Music Journal 2(2):61-62.

Roads, C. 1988. "Introduction to Granular Synthesis." Computer Music Journal 12(2).

Roads, C. 1991. "Asynchronous Granular Synthesis." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Roads, C. 1995. Pulsar Synthesis. Unpublished manuscript.

    Scaletti, C. 1989. "The Kyma-Platypus Computer Music Workstation." Computer Music Journal 13(2):23-38.

Scaletti, C., and K. Hebel. 1991. "An Object-Based Representation for Digital Audio Signals." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Schaeffer, P. 1959. À la recherche d'une musique concrète. Paris: Seuil.

Schaeffer, P. 1966. Traité des objets musicaux. Paris: Seuil.

Serra, X., and J. O. Smith. 1990. "Spectral Modeling Synthesis: A Sound Analysis/Synthesis System Based on a Deterministic Plus Stochastic Decomposition." Computer Music Journal 14(4):12-24.

Settel, Z., and C. Lippe. 1994. "Real-Time Musical Applications Using FFT-Based Resynthesis." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

Smalley, D. 1986. "Spectromorphology and Structuring Processes." In S. Emmerson, ed. The Language of Electroacoustic Music. Basingstoke, UK: Macmillan.

Smith, J. O. 1992. "Physical Modeling Using Digital Waveguides." Computer Music Journal 16(4):74-91.

Strawn, J. 1980. "Approximation and Syntactic Analysis of Amplitude and Frequency Functions for Digital Sound Synthesis." Computer Music Journal 4(3). Reprinted in C. Roads, ed. 1989. The Music Machine. Cambridge, Massachusetts: MIT Press.

Taube, H. 1991. "Common Music: A Music Composition Language in Common Lisp and CLOS." Computer Music Journal 15(2):21-32.

Taube, H. 1993. "Stella: Persistent Score Representation and Score Editing in Common Music." Computer Music Journal 17(4).

Teruggi, D. 1994. "The Morpho Concepts: Trends in Software for Acousmatic Music Composition." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

    Truax, B. 1988. "Real-Time Granular Synthesis with a Digital Signal Processor." Computer Music Journal 12(2):14-26.

    Vaggione, H. 1984. "The Making of Octuor." Computer Music Journal 8(2). Reprinted in C. Roads, ed. 1989. The Music Machine. Cambridge, Massachusetts: MIT Press.

    Vaggione, H. 1985. Transformations spectrales dans la composition de Thema. Paris: Rapport interne IRCAM.

Vaggione, H. 1991. "A Note on Object-Based Composition." In O. Laske, ed. "Composition Theory." Interface 20(3/4):209-216.

Vaggione, H. 1994. "Timbre as Syntax: A Spectral Modeling Approach." In S. Emmerson, ed. "Timbre in Electroacoustic Music Composition." Contemporary Music Review 8(2):91-104.

    Wiener, N. 1964. "Spatio-Temporal Continuity, Quantum Theory and Music." In M. Capek, ed. The Concepts of Space and Time. Boston, Massachusetts: Reidel.

    Winograd, T. 1979. "Beyond Programming Languages." Communications of the Association for Computing Machinery 22(7):391-401.

Xenakis, I. 1971. Formalized Music. Bloomington, Indiana, USA: Indiana University Press.
