Works for percussion and computer-based live electronics:
aspects of performance with technology
Fernando de Oliveira Rocha
Schulich School of Music
McGill University
Montreal, Canada
April 2008
A doctoral paper submitted to McGill University in partial fulfillment of the
requirements of the degree of Doctor of Music
© 2008 Fernando Rocha
Acknowledgments
First of all I would like to thank the Brazilian Agency CAPES (Coordenação de
Aperfeiçoamento de Pessoal de Nível Superior), which has financially supported my doctoral
studies. I am also grateful to UFMG (Universidade Federal de Minas Gerais) for granting my
leave during the last four years.
My gratitude also goes to my doctoral committee, which includes my percussion instructor,
D’Arcy Philip Gray; my music research advisor, Prof. Sean Ferguson; and Professors Marcelo
Wanderley, Fabrice Marandola, and Eleanor Stubley. In addition, the Percussion Area Chair,
Prof. Aiyun Huang, has been extremely supportive of both my performance and research.
I would like to acknowledge the McGill Percussion Area, DCS (the Digital Composition
Studios), IDMIL (the Input Devices and Music Interaction Laboratory), CIRMMT (the Centre
for Interdisciplinary Research in Music Media and Technology), and the McGill Digital
Orchestra Project.
In the realization of my lecture-recital and in the preparation of this paper, the following
people were very helpful: composers Geof Holbrook, Sérgio Freire, and Martin Matalon;
collaborators Joseph Malloch, Anthony Tan and Kensuke Irie. Thanks also to Elise Pittenger for
her support and proofreading.
Finally, I would like to express sincere gratitude to my family for many years of support and
encouragement.
Abstract
In electronic music, ‘mixed works’ are pieces that combine the live performance of one or
more acoustic instruments with sounds created, processed, or reproduced electronically. The
term ‘computer-based live-electronics’ refers to mixed works in which the electronic part is
controlled – in real time – by a computer system. Live-electronic performance is often called
interactive performance, since there is normally a high degree of interaction between the
performer and electronics.
This paper addresses issues related to the performance of mixed works for percussion and
computer-based live electronics. Its main focus is performance practice issues, particularly those
decisions that are affected by the presence of electronics. Four pieces are used to illustrate the
discussion:
1 – Wooden Stars (2006 - rev. 2008), Geof Holbrook
2 – Anamorfoses (2007), Sérgio Freire
3 – Traces IV (2006 - rev. 2008), Martin Matalon
4 – Improvisation for hyper-kalimba (2008), Fernando Rocha
Performance of these works requires the use of different electronic devices, such as microphones, a mixing board, an audio interface, loudspeakers, a MIDI interface, and a computer. The software most commonly used for interactive pieces is Max/MSP. The relationship with this equipment and with the specific characteristics of this repertoire poses new challenges to performers and may require new skills, including new performance gestures and basic music technology skills.
Moreover, performers often have to adjust their playing to ensure that the interactive system will
work properly. In fact, the performer should assume a more active role in this repertoire, since
the electronic part of an interactive work can be directly influenced by the way he or she plays.
Abrégé
Dans la musique électronique, les pièces mixtes sont des compositions qui combinent
l’interprétation en temps réel sur un ou plusieurs instruments acoustiques avec des sons créés,
transformés, ou reproduits électroniquement. Le terme ‘computer-based live-electronics’
(‘électronique en temps réel contrôlée par système informatique’) est utilisé pour qualifier les
pièces mixtes dans lesquelles la partie électronique est contrôlée – en temps réel – par un
système informatique. L’interprétation avec électronique en temps réel est souvent appelée
interprétation interactive puisque le niveau d’interaction entre l’interprète et le dispositif électronique y est généralement très élevé.
Cet article explore les différents problèmes liés à l’interprétation de pièces mixtes pour
percussion et électronique en temps réel contrôlée par système informatique. Son objectif
principal est de traiter les questions d’interprétation reliées à la présence du dispositif
électronique. Quatre pièces sont utilisées pour illustrer l’argumentation:
1 – Wooden Stars (2006 - rev. 2008), Geof Holbrook
2 – Anamorfoses (2007), Sérgio Freire
3 – Traces IV (2006 - rev. 2008), Martin Matalon
4 – Improvisation for hyper-kalimba (2008), Fernando Rocha
L’interprétation de ces oeuvres requiert l’utilisation de différents dispositifs électroniques tels
que microphones, table de mixage, interface audio, enceintes acoustiques, interface MIDI, et un
ordinateur. Le logiciel utilisé le plus communément est Max/MSP. La relation au dispositif
électronique et aux caractéristiques spécifiques de ce nouveau répertoire pose de nouveaux défis
à l’interprète et nécessite de nouvelles compétences, parmi lesquelles l’apprentissage de
nouveaux gestes musicaux et une connaissance élémentaire de la technologie de la musique. De
plus, l’interprète est souvent amené à adapter son jeu afin que le dispositif électronique puisse
fonctionner correctement. L’interprète se doit ainsi d’assumer un rôle plus actif dans ce type de
répertoire puisque la partie électronique peut être directement influencée par son jeu.
Contents

1. Introduction
2. Mixed works
   2.1. Historical background
   2.2. Mixed works for percussion and electronics: repertoire
      2.2.1. Mixed works with fixed electronic media
         Works with tape produced in the early studios
      2.2.2. Mixed works with live electronics
         John Cage’s early experiments
         Works with live electronics using analog devices
         Computer-based live-electronic music
3. Aspects of technology in mixed works with computer-based electronics
   3.1. Hardware
   3.2. Software for interactive music
4. Musical issues in the interaction between performer and electronics
   4.1. Interactive systems
   4.2. Forms of implementing interactive systems and performance issues
      4.2.1. Performing with fixed media
      4.2.2. Performing with live electronics
         Human inputs
         Live-electronic performance: forms of interaction
5. Looking at the pieces
   5.1. Geof Holbrook: Wooden Stars (2006 - rev. 2008)
      5.1.1. General information
      5.1.2. The role of technology
      5.1.3. Performance issues
   5.2. Sérgio Freire: Anamorfoses (2007)
      5.2.1. General information
      5.2.2. The role of technology
      5.2.3. Performance issues
   5.3. Martin Matalon: Traces IV (2006 - rev. 2008)
      5.3.1. General information
      5.3.2. The role of technology
      5.3.3. Performance issues
   5.4. Fernando Rocha: Improvisation for hyper-kalimba (2008)
      5.4.1. General information
         An overview of the concept of Digital Musical Instrument
      5.4.2. The role of technology
      5.4.3. Performance issues
6. Conclusion
   6.1. Possible responses to the initial questions
   6.2. Final thoughts
7. References
Appendix: List of equipment
1. Introduction
The use of electronics in new music is becoming more and more prevalent, a trend that some musicologists foresaw decades ago. As early as 1979, Paul Griffiths stated:
There is every reason to believe that the contribution of electronics to contemporary
music will grow. Certainly it is no longer possible to dismiss electronic music as an
esoteric sideline. (Griffiths, 1979, 11)
In fact, electronic music is today a widely accepted part of contemporary music, even though
some musicians are still insecure about performing with electronics. Some players believe that
working with electronics is too complicated and time-consuming, and that there is also the risk of
the computer crashing during the concert. For those performers who do wish to integrate
electronic music into their concerts, however, it is important to understand which aspects of a
performance are influenced by their relationship with the electronics, and how they can deal with
this relationship in different situations.
In electronic music, ‘mixed works’ are pieces that combine the live performance of one or
more acoustic instruments with sounds created, processed or reproduced electronically. There are
two main types of mixed works: pieces with fixed electronic media, frequently called ‘pieces
with tape’, and pieces with live electronics. In a performance with fixed electronic media, the
performer is accompanied by a tape, a CD, or sound files played by a computer, containing the
pre-recorded electronic part created by the composer for that particular piece. In a performance
with live electronics, the electronic part is not fixed, which means that the sounds are
electronically created or manipulated in real time (i.e., in the moment of the performance) along
with the acoustic sounds. Currently, the performance of mixed works for acoustic instruments
and live electronics is mostly based on the use of computers and software (especially Max/MSP).
In this paper, the term ‘computer-based electronics’ will be used to refer to mixed works in
which the electronic part is controlled by a computer system. The performance can combine pre-recorded sounds, the synthesis of new sounds, and the real-time processing of the sounds
produced by the acoustic instruments during the live performance. Live-electronic performance
is often called interactive performance, since there is normally a high degree of interaction
between performer and electronics. According to Todd Winkler:
Interactive music is defined here as a music composition or improvisation where software
interprets a live performance to affect music generated or modified by computers.
Usually this involves a performer playing an instrument while a computer creates music
that is in some way shaped by the performance. (Winkler, 1999, 4)
This paper will present issues related to the performance of recent pieces for percussion and
computer-based live electronics. Most of the issues to be considered are not unique to the
performance of percussion instruments. They might also arise in the performance of mixed
works for any instrument and electronics. This paper is therefore addressed to performers in
general, not only percussionists.
The structure of this paper is as follows. Chapter 2 contains a historical review of the
development of mixed works, with an emphasis on percussion and electronics; Chapter 3
presents some technological aspects related to this repertoire; and Chapter 4 addresses musical
issues related to the interaction between performer and electronics. These three chapters provide
a historical/theoretical context for the discussion of four recent mixed works for percussion and
electronics that follows in Chapter 5. The main focus of this discussion is on performance issues,
particularly on performance decisions that are affected by the presence of electronics/technology
in the four pieces that will be studied. The four works are:
1 – Wooden Stars (2006 - rev. 2008), Geof Holbrook (Canada, b. 1978)
2 – Anamorfoses (2007), Sérgio Freire (Brazil, b.1962)
3 – Traces IV (2006 - rev. 2008), Martin Matalon (Argentina/France, b.1958)
4 – Improvisation for hyper-kalimba (2008), Fernando Rocha (Brazil, b.1970)
In choosing this repertoire, I looked for recent pieces that could highlight current issues
relating to performance with electronics. I was also interested in working with composers with
different musical training and backgrounds, in order to further illustrate the variety of
possibilities inherent in this genre. The first three pieces demonstrate different uses of technology
and ways of interacting with electronics. While in these pieces I have been mainly involved as a
performer, the fourth piece, Improvisation for hyper-kalimba, has allowed me to work directly
with composition and technology, as well as performance. This has been valuable because these
three aspects are so interrelated in this repertoire.
Through the study of these pieces, important questions will be addressed:
(1) Which aspects of music technology (if any) should the performer be familiar with
when performing this repertoire? How can this knowledge produce a better performance?
(2) What is the interaction between performer and electronics in a mixed work? (a) How
can synchronization be achieved? (b) How are electronic and acoustic sounds mixed
together and diffused?
(3) How can performance decisions be affected by an electronic part in a piece?
(4) Are there differences between performing/practicing an acoustic work and
performing/practicing a mixed work?
Finally, Chapter 6 proposes possible answers to these questions, as well as final reflections
about the role of the performer in electronic music.
2. Mixed works
2.1. Historical background
Technology has always played an important role in the history of music. The development of
the modern piano during the 19th century is a very good example (Pinch and Bijsterveld, 2003;
Belet, 2003; Winkler, 1998). The possibilities provided by the modern piano were fundamental
to the work of composers such as Liszt and Chopin and to the establishment of the aesthetics of
romantic music, strongly centered on the cult of the virtuoso. It is not surprising that in the 20th
century musical aesthetics were similarly influenced by the possibilities provided by technology.
The ability to convert sounds to electric signals (and vice versa) was a major discovery of the late 19th century, one that stimulated the invention of new instruments, such as the ‘Musical
Telegraph’ (1876), the ‘Dynamophone’ or ‘Telharmonium’ (1897), the ‘Theremin’ (1924) and
the ‘Ondes Martenot’ (1928) (Chadabe, 1997; Manning, 2004; Obsolete website, 2008).
At the same time that technology was opening new possibilities for music (including
recording techniques), certain aesthetic issues were being addressed. The crisis of the tonal system was a major event, as can be seen in the works of Stravinsky, Debussy, and Schoenberg. More radically, the Futurist Movement was pushing for the use of machines and noises in the music-making process. In 1913, Luigi Russolo published the manifesto “The Art of Noises”,
in which he stated:
Musical sound is too limited in its qualitative variety of tones. The most complex
orchestras boil down to four or five types of instrument, varying in timbre: instruments
played by bow or plucking, by blowing into metal or wood, and by percussion. And so
modern music goes round in this small circle, struggling in vain to create new ranges of
tones. This limited circle of pure sounds must be broken, and the infinite variety of
‘noise-sound’ conquered. (Russolo, 2004, 11)
The most important pioneer composer who advocated and explored the use of noise (i.e.,
non-pitched sounds) in the music-making process was Edgard Varèse. In 1916, he affirmed: “our
musical alphabet must be enriched… We also need new instruments very badly” (qtd. in
Manning, 2004, 7). Varèse had the opportunity to listen to the ‘Telharmonium’ in New York, but
he was disappointed since the instrument was very limited in terms of timbre and was used to
produce melodies in a musical language very tied to the past. In fact, Varèse was fighting “for
the liberation of sound and for [the] right to make music with any sound and all sounds” (Varèse,
2004, 19). Ionisation (1930-31), written for percussion instruments (‘noise makers’), including
sirens and an anvil, uses very few pitched sounds; they appear only in the last measures, to create
cluster effects. Varèse strongly believed in the importance of the use of electronics in the process
of the ‘liberation of sound’. During the 1920s and 1930s, he tried, unsuccessfully, to obtain funding to
create or access practical facilities for developing his research. The ‘opening up of music to all
sounds’ proposed by Varèse can be considered to be one of the most important steps in the
history of electronic music (Chadabe, 1996, 3).
The first successful attempts at creating a new musical aesthetic based on the use of
electronics date from the late 1940s and early 1950s. Supported by radio stations, the first studios
were established in Paris, with Pierre Schaeffer and Pierre Henry, and in Cologne, with Herbert
Eimert, Robert Beyer and Meyer-Eppler (soon joined by Stockhausen). The Paris group created
musique concrète, which explored the possibilities of the manipulation of real sounds by using
recording and playback techniques. On the other hand, the group in Cologne explored sounds
created electronically in a process called ‘sound synthesis’, based on the conversion of periodic
electric signals into sound. Work in laboratories, first with analog equipment (tapes, oscillators,
early analog synthesizers), and later with computers and digital technology allowed composers to
spend more time in the exploration of sounds with electronic support. With the assistance of
technicians, composers were able to work more deeply with properties of sound, especially
timbre, processing existing sounds and/or creating new ones. This, however, was a slow process
requiring machines that could not be moved to a performance space, and that, moreover, could
not operate in real time. Therefore, the results of a composer’s work were recorded on a
magnetic tape and could be diffused in a performance space, thus creating a new concert
situation, without the participation of performers. This new situation allowed composers to have
much more control over their final works, since there was no room for a performer’s deviations
(i.e., rubato, vibrato, or even possible mistakes). It also allowed for a more complex exploration
of timbre, in a way that the use of traditional instruments could not achieve. On the other hand,
these works became completely fixed (unlike all previously composed music); this can be considered one aesthetic problem of tape music:
Perhaps this interpretive flexibility is one of the reasons for the longevity of the
classical/romantic repertoire. This repertoire has focused so much on performer
‘interpretation’ that it has managed to be reinterpreted with new significance through
over 250 years of cultural change. (Garnett, 2001, 27)
Moreover, “to some people, the lack of visual stimulus may pose a problem for presenting
tape music in a concert setting” (Winkler, 1998, 11). David Cope pointed out that “the obvious
loss to the audience of more or less theatrical or visual activity during performance of works on
tape has inspired three alternatives: (1) combination of live performer with tape; (2) live
electronic music (…); (3) tape used in conjunction with projections and/or theatrical events”
(Cope, 1998). Both alternatives (1) and (2), i.e., mixed works, combine the sound possibilities
offered by electronic means with the participation of a performer, providing flexibility and a
visual aspect.
2.2. Mixed works for percussion and electronics: repertoire
Percussion and electronics have each had an important role in the development of
contemporary music in the 20th century. While these two fields have distinct characteristics, they
also share common aesthetic interests related to the exploration of musical timbre – including noise – and to the development of a new musical language not necessarily centered on pitch. It
should not be surprising, therefore, that there are a number of pieces that combine percussion and
electronics. The following sections will present examples of mixed works for percussion and
electronics from different periods and ‘styles’ of electronic music.
2.2.1. Mixed works with fixed electronic media
Works with tape produced in the early studios
After 1950, electronic music studios were created in many different places, such as Milan (at
RAI), New York (at Columbia University in partnership with Princeton), and Utrecht (at the
University of Utrecht). These studios did not rigidly follow the aesthetic of either Paris or
Cologne. In the beginning, most of the pieces produced in these studios were for tape alone, but
they also produced several pieces for tape and instruments.
One of the first pieces written for instruments and tape is Musica su due dimensioni, by
Italian composer Bruno Maderna. It is a serial work for flute, cymbal and tape, written in 1952.
The tape part, created with the assistance of Meyer-Eppler at the University of Bonn, contains
sounds produced by the melochord, an analog synthesizer (Neidhöfer, 2007). The cymbal part is
modest, but it is significant that percussion was included in this historic work.
Stockhausen, in Cologne, composed several pieces of electronic music with and without
instruments. One of his most influential works is Kontakte (1959-1960). The piece has two
versions: one for quadraphonic tape alone and one that includes piano and percussion. The name
of the piece refers to contacts “among electronic and instrumental sound groups, autonomous
formal structures (‘moments’), and the forms of spatial movement ” (Appleton and Perera, 1975,
113). One of the main aspects of the piece is the spatial projection of the sounds – spatialization – through the careful use of sound diffusion by four speakers. The effect produced is that of sound
moving in space. Stockhausen also worked with the idea of creating a continuum between the
duration structure of events and their own timbre. By decreasing the speed of one event to a
subaudio frequency, structural components of this event “become events in their own right, the
‘atomic’ structure thus being revealed as a rhythmic force” (Manning, 2004, 60).
In Paris, Pierre Schaeffer and Pierre Henry were more interested in developing works
exclusively for tape. However, other composers associated with the group, such as Luc Ferrari
and François-Bernard Mâche, created mixed works for tape with instruments. Mâche composed
Temes Nevinbür (1973), for two pianos, two percussionists, and tape, and Marae (1974), for six amplified percussionists and tape. Luc Ferrari composed Cellule 75, Force du Rythme et Cadence
Forcée (1975), for piano, percussion and tape.
In New York, Mario Davidovsky, an Argentine composer and former Associate Director of
the Columbia-Princeton Electronic Music Center, composed Synchronism No. 1, for flute and
tape, in 1962. Since then, he has composed eleven other pieces called Synchronism. The two
most recent ones (Nos. 11 and 12) were premiered in March 2007, during the annual conference of
the Society for Electro-Acoustic Music in the United States (SEAMUS). Synchronism No. 5 is
written for percussion quartet and tape. An interesting aspect of this series of works is how
Davidovsky solved problems of synchronization between musicians and tape. He combined short
sections that ask for a perfect synchronization with long sections where the tempo can be more
flexible. To ensure that the performer will always be able to catch up with the tape, he introduced fermatas
between some of the sections.
Another interesting early piece is Lejaren Hiller’s Machine Music (1964), for piano,
percussion and tape. Hiller’s compositions were based on automated procedures, controlled by
computer programs specially developed by him. Machine Music has eleven movements and
requires a precise synchronization between players and tape. The tape runs continuously, and, in
order to make sure that players will start every movement together with the tape, the composer
includes one extra bar with a click track (i.e., a steady beat) in the tape part, just before the
beginning of the movement. To ensure that the audience will not hear the click track, the sound
assistant should have headphones. Before the click starts the assistant switches the sound from
the speakers to his headphones (so that the click track is not broadcast to the speakers). He then
cues the performers according to the click track, and sends the sound back to the speakers.
Prison Song (1971), by Hans Werner Henze, is a piece for percussion and tape¹ in which the
performer is instructed to create his own tape for the piece by recording his voice and
instruments. The piece has a theatrical component, since the performer assumes the role of a
prisoner. During the performance the percussionist speaks (in a sprechstimme) two verses from
the “Prison Diary” of Ho Chi Minh. The tape is used to help build the drama of the situation: a
prisoner surrounded just by his voice and sounds reverberating in the space. Therefore, the
sounds of the tape should match the sounds produced during the performance.
The examples of Synchronism, Machine Music, and Prison Song bring up some of the main
recurrent questions of playing with tape: (1) how to provide some freedom for the performer; (2)
how to ensure that the synchronization between tape and performer will be perfect; and (3) how
to create a situation in which the player can blend his or her sound with the tape. Aspects of
interaction between electronics and performer will be discussed in Chapter 4 and will also be
important in the study of the four works (Chapter 5).
The repertoire for percussion and fixed media is extensive and continues to grow. More
recent examples of pieces for percussion and fixed media include: Martin Wesley-Smith, For
Marimba and Tape (1983); alcides lanza, Interferences III (1983), Diastemas (2005); Javier
Alvarez, Temazcal (1984); Jean Piché, Steal the Thunder (1984); Hans Tutschku, The Metal
Voice (1992); Edmund Campion, Losing Touch (1993); Daniel Almada, Linde (1994); Bruce
Hamilton, Interzones (1996), and John Luther Adams, Mathematics of Resonant Bodies (2003).
¹ The performance of Prison Song also requires the use, in real time, of a delay effect.
2.2.2. Mixed works with live electronics
John Cage’s early experiments
John Cage was always interested in exploring noises and discovering new sounds.
Percussion was therefore a natural path for his musical explorations. Moreover, influenced by
Varèse and by the ideas of the Futurist Movement, he also advocated for the use of electronics,
as stated in his famous essay, “The Future of Music: Credo”:
The special property of electrical instruments will be to provide complete control of the
overtone structure of tones (as opposed to noises) and to make these tones available in
any frequency, amplitude, and duration. WHICH WILL MAKE AVAILABLE FOR
MUSICAL PURPOSES ANY AND ALL SOUNDS THAT CAN BE HEARD (…)
Percussion music is a contemporary transition from keyboard-influenced music to the all-sound music of the future. Any sound is acceptable to the composer of percussion music;
he explores the academically forbidden "nonmusical" field of sound insofar as is
manually possible. Methods of writing percussion music have as their goal the rhythmic
structure of a composition. (Cage, 1961, 4-5)
Cage was very interested in the use of electronics in concerts, i.e., live-electronic music. The
first piece in which Cage used electronic devices was Imaginary Landscape No. 1 (1939), for
muted piano, cymbal and two variable-speed turntables. It is important to note that electronic
sound, for Cage, was just one more option in his palette of sonic possibilities. His compositions
were based on a rhythmic structure that could employ any sound (or silence). He even included
random ‘found objects’ such as the recordings used in Imaginary Landscape, which were found
by chance in a radio studio. In Imaginary Landscape No.1 the turntables play
recordings of test tones (either constant frequencies or varying ones). The records can be
played at either of two speeds – 33 1/3 or 78 rpm – the speed changed by a clutch.
Rhythms are produced by lifting and lowering the record needle. The effect of the pitch
sliding when the turntable speed is changed is striking and eerie. (Pritchett, 1993, 20)
There are many examples of live-electronic pieces in Cage’s work, such as: the series of
Imaginary Landscape 1-5, for electronic devices (such as turntables and radios) with or without
instruments; Credo in Us (1942), for piano, 2 percussionists, radio and/or phonograph; Cartridge
Music (1960), for amplified objects; Variations V (1965), a multimedia performance; and Child
of Tree (1975), for amplified instruments made of plant materials. Cage’s works with live
electronics inspired a large number of musicians in America, such as David Tudor, David
Behrman, Gordon Mumma, Alvin Lucier, Pauline Oliveros, and the percussionist Max Neuhaus,
who produced electronic realizations of Earle Brown’s Four Systems (1964), Sylvano Bussotti’s Coeur
pour batteur (1965), and Cage’s Fontana Mix (1965) (Manning, 2004, 167).
Works with live electronics using analog devices
In 1959, Mauricio Kagel composed Transición II for piano, percussion and 2 tape recorders.
One tape reproduces material recorded previously; the second one records parts of the
performance to be reproduced later in the work, using loops. With this procedure Kagel suggests
an idea of an “echo of events past” (Manning, 2004, 158).
Stockhausen also produced a number of pieces for live electronics. Mikrophonie I (1964) is a
piece for a single tam-tam, but it requires six players: two excite the instrument with different
mallets and objects, two pick up the vibrations using microphones, and two control filters and a
potentiometer. The actions of the six musicians can influence pitch, timbre, dynamics, and the
“spatial impression of the sound” (Chadabe, 1997, 85).
Solo for melody instrument and feedback (1966) is another example by Stockhausen. This
piece is for any melodic instrument and requires at least three assistants: one to record parts of
the solo performance on different channels; one to play back these recordings at particular
moments; and one to record the output of the tape (feedback loop). Recent realizations of the
piece have been made without requiring the three assistants. All the work of recording and
playback can be controlled automatically by a computer system. The performer, then, gains
‘digital autonomy’, assuming total responsibility for the performance and having more intimate
control of the work (Esler, 2006).
Computer-based live-electronic music
The beginning of computer music can be dated from 1957, when Max Mathews created
MUSIC, the first of a series of programs used for sound generation. However, at that time it was
impossible to perform with computers in real time. Computers were used to create tape parts.
With recent advances in technology, much more portable and powerful computers have made
possible the advent of real-time computer music systems, opening up many new possibilities for
live electronic music. Composer Joel Chadabe remembers: “In 1977, I was lucky enough to be
able to buy a small, portable computer music system. It was the first Synclavier, and it made
wide-ranging performance possible for me” (Chadabe and Williams, 1987, 100). Working with
percussionist Jan Williams, he composed Rhythms (1980) and Follow Me Softly (1984), both for
computer/synthesizer and percussion. One of his goals was to provide new approaches to
improvisation and to establish a real interaction between performer and machine.
An important centre for the development of computer-based live-electronic music is IRCAM
(Institut de Recherche et Coordination Acoustique/Musique), in Paris, where composers and
technicians work together in the development of computer systems for music applications. The
software Max, for example, was created at IRCAM in 1987. Since then, it has been used in many
pieces, including Six Japanese Gardens (1993), by Finnish composer Kaija Saariaho, for
percussion and electronics.
Six Japanese Gardens is an interactive piece in which the performer triggers sound files by
pressing pedals at precise moments indicated in the score. The Max patch required for the
performance is provided with the score. However, the patch was written for a Macintosh
computer running the OS 9 operating system. Newer Macintosh computers run OS X, which cannot run the original patch written for Six Japanese Gardens. Updating this patch
is not a difficult task, but it reminds us of one problem of electronic music that some researchers
have started to address: technological obsolescence.
Preservation of the original technology itself has already proven difficult if not
impossible, due to the lack of clear standardization of equipment that constitutes an
‘interactive system’. Musicians who wish to perform these works should be armed with
the best possible tools for realizations that are faithful to the composer’s intentions, while
using the technology at hand. (Wetzel, 2006)
Another interactive piece for percussion and electronics is Neptune (1991), by Philippe
Manoury. Neptune is scored for two vibraphones, marimba, and tam-tam. In this piece, Manoury
used the technique of score following: i.e., the computer is able to listen to the performers and
follow them. This technique is based on pitch detection. In this particular piece, Manoury and his
assistant Cort Lippe had a problem: how to implement pitch detection when a tam-tam, with a
spectrum full of different frequencies, is playing with the other instruments. To address this, he
used two MIDI vibraphones, which can transmit digital information directly to the computer
system.
A final example is La Coupure (2000), by composer James Dillon. The piece is scored for
percussion and real-time audio and video technology (Schick, 2006, 135-139). As previously
stated, the use of computer systems opens new possibilities for composers to work even further
with sounds. As this piece demonstrates, a computer system can also control other elements,
such as video projection or lighting.
3. Aspects of technology in mixed works with computer-based
electronics
3.1. Hardware
Performance of mixed works requires the use of different electronic devices. FIG. 1 is a
representation of a sound system for computer-based interactive music:
FIG. 1. Sound system for a computer-based interactive work
The sound of the instrument is captured by microphones and sent first to a mixing board and
then to an audio interface; here it is converted to digital audio and sent to a computer, where it
can be manipulated. The electronically processed sounds go back to the mixing board (through
the audio interface, again) and are sent to the speakers to finally reach the audience, together
with the live sound of the acoustic instrument. A MIDI interface can be used to transmit data to
the computer (for example, pressing a pedal, or playing a pad or a keyboard), which can be used
as a trigger or as a parameter for sound processing.
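As a rough text-based sketch of this signal chain (an illustration only, assuming the third-party Python library sounddevice; in practice this role is played by dedicated audio software such as Max/MSP), the fragment below opens a duplex audio stream: the digitized input arrives in a callback, is manipulated by the computer, and is sent back out toward the mixing board and speakers:

    import sounddevice as sd

    def process(block):
        # Placeholder for the computer's role in FIG. 1: any real-time
        # manipulation of the digitized signal (filtering, delay, etc.).
        return 0.8 * block  # here, simply an attenuation

    def callback(indata, outdata, frames, time, status):
        # indata: the microphone signal after A/D conversion
        # outdata: the processed signal returned to the audio interface
        outdata[:] = process(indata)

    # Duplex stream: analog in -> computer -> analog out, in real time.
    with sd.Stream(channels=2, callback=callback):
        sd.sleep(10_000)  # keep the stream open for ten seconds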
Electronic music creates a performance situation in which the sounds do not come directly
from their original source. Instrumental and electronic sounds are projected to the audience
through the speakers. One property of sound can be clearly affected in this process: direction.
Composers can manipulate this by placing sounds in different locations and moving them in
space. Spatialization has become an important parameter in electronic music. In addition, the
timbres of acoustic instruments and the balance between instrumental and electronic parts are
affected by the amplification process and should be carefully adjusted to ensure a good
performance result.
3.2. Software for interactive music
In 1987, at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris,
Miller Puckette created the software Max (named after Max Mathews). The software was first
used for Jupiter, a piece for flute and electronics by Philippe Manoury. Max continued to be developed during the 1990s, and it is now the most common software used by composers in works
for instruments and electronics. Open-source alternatives to Max are PD (‘Pure Data’), a piece of software also developed by Miller Puckette, which shares many characteristics with Max
(Pure Data website, 2008); and SuperCollider, an “environment and programming language for
real time audio synthesis and algorithmic composition” (SuperCollider website, 2008).
Max/MSP contains a set of objects (with different functions) that one can link together to
create software applications, called Max patches.
Max/MSP is a graphical programming environment, which means you create your own
software using a visual toolkit of objects, and connect them together with patch cords.
The basic environment that includes MIDI, control, user interface, and timing objects is
called Max. Built on top of Max are hundreds of objects (…) MSP (is) a set of audio
processing objects that do everything from interactive filter design to hard disk recording.
(Cycling’74 website, 2008)
By using Max/MSP, composers can create Max patches to execute the electronic part of their
works. These Max patches can digitally process sound, play sound files triggered by the
performers, and also create sounds electronically using digital synthesis, all in real time.
FIG. 2 shows a very simple Max patch. In Example 1, the sound is converted from analog to
digital by the ‘adc’ object. This object is connected to two ‘gain’ sliders, which amplify the
signal. Finally, the sound is reconverted to analog by the ‘dac’ object and sent to two speakers. In
Example 2, a ‘del~’ (delay) object is introduced and connected to one of the sliders. An extra slider, located at the upper right, controls the delay time. As a result of this delay effect, the sound will arrive later in one of the speakers, creating an echo effect.
FIG. 2. Simple Max patch – Examples 1 and 2
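The same logic can be written out in ordinary code. The following sketch (an illustration only, assuming NumPy, and processing a buffer offline rather than a live signal) mirrors Example 2: a gain stage feeding one direct and one delayed output channel:

    import numpy as np

    def echo_patch(mono, sr, gain=0.8, delay_s=0.25):
        # Offline equivalent of Example 2 in FIG. 2: the input passes
        # through a gain stage; one channel is direct, the other delayed.
        d = int(sr * delay_s)                 # delay time in samples
        direct = np.pad(gain * mono, (0, d))  # the 'gain' slider
        delayed = np.concatenate([np.zeros(d), gain * mono])  # the delay
        return np.stack([direct, delayed], axis=1)

    # Usage: a one-second 440 Hz test tone in place of a live input.
    sr = 44100
    t = np.arange(sr) / sr
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)
    stereo = echo_patch(tone, sr)  # the second channel echoes the first

Played through two speakers, the delayed channel arrives later than the direct one, which is precisely the echo effect described above.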
During a performance using Max/MSP, a patch may need to be operated, which means that
someone should control it by providing data when necessary. The data can be used as parameters
for sound processing or synthesis, or to ensure perfect synchronization between performer and
computer. In fact, synchronization is one of the major issues concerning mixed works, and how
the computer can be programmed to ‘listen’ and ‘follow’ the performer is an important question
that composers and music technicians have to face all the time. Different ways of making the
computer follow the performer have been used in different pieces. By pressing a pedal, for
example, the performer can trigger the electronic events of the piece at the right moment. Ways
to implement synchronization in interactive works will be discussed in the next chapter.
4. Musical issues in the interaction between performer and
electronics
Interactivity is one of the major issues in the current discussion of electronic music. At the
center of this discussion is the relationship between human and machine that is present in mixed
works. This relation can be implemented in different ways, with different levels of interactivity.
This chapter will present some of the forms that interactivity has taken in mixed works, pointing
out some positive and negative aspects. First, however, there will be a brief discussion about the
concept of interactive music systems. One should be careful using the term interactivity since
different authors have given it different meanings.
4.1. Interactive systems
Interaction is a concept that can be compared to a form of conversation, i.e., a relation
between two people in which each one talks and listens, acting and reacting to each other. In fact,
the same happens in any chamber music performance, with musicians playing, listening and
responding to each other (Winkler, 1998, 4). Interactive computer music introduced the idea of
interaction into the musical relationship between human and machine. Following the model of a
human dialogue, one can assume that “musical interaction requires musical understanding
involving a huge number of shared assumptions and conventions based on years of collective
experience” (Miranda and Wanderley, 2006, 224). By this definition, a machine should be able
to understand the actions of a performer and react based on that understanding, as well as on its
own ‘musical knowledge’. However, many so-called interactive systems are just reactive
systems, since there is no mutual influence (Bongers, 2000). A ‘de facto’ interactive system
requires some kind of artificial intelligence, which provides a sense of musical understanding to
the machine.
The term ‘interactivity’ in music, however, has been used in a broader sense to refer to
electronic music systems – usually computer-based systems - that can be controlled in real time
(Garnett, 2001; Belet, 2003; Winkler, 1998). Different levels of interaction can be found in
different systems. Garnett poses a number of questions regarding interaction:
Does the act of diffusing a tape or a CD in concert make a piece into interactive computer
music? Is a piece for tape and live performer necessarily interactive? Is there a level or
kind of interaction that leads to a qualitatively different musical experience? (Garnett,
2001)
Garnett affirms that an interactive piece should have a strong performance component.
Therefore, the answer to his first and second questions should be ‘no’. But the answers to these
questions cannot be taken in an absolute way, especially if we assume the broader concept of
interactivity. The art of diffusing tape music can be considered interactive (Stroppa, 1999), since
it implies some control over the sound system, or at least over certain parameters, like starting,
adjusting volume, and, sometimes, controlling spatialization. In the same way, pieces for tape
and live performer can also be considered to be interactive (Menezes, 2002).
The last question raises even more controversial issues. Does interaction create a
‘qualitatively different musical experience’? It is true that interactivity has been considered very
appealing – a ‘hot concept’ to Saltz (1997), or ‘sexy’ to Machover (2002). The human-machine
relation seems to create some fascination, some kind of fetish, especially when there is artificial
intelligence involved, i.e., when there is a high level of interaction. George Lewis’s ‘Voyager’
system is an example.
All that should matter to us are the visual appearance and the acoustic properties of
Lewis’s performance. But the reality does matter; indeed, the quality of the music plays
only a minor role in the fascination this work holds. (…) The interest here is in hearing
the system and the live performer adapt to each other’s performances, in observing the
development of a unique relationship between system and human. In other words, what is
most interesting is precisely the feat itself, the action, the event (Saltz, 1997, 124).
A general argument in favor of interactivity is that it provides more freedom to the performer,
especially with tempo, favoring interpretation. Alvarez provides examples of pieces for
instruments and tape that also preserve a relatively high level of flexibility in the performance
(Alvarez, 1989). Moreover, according to Stroppa, in the performance of a piece like
Stockhausen’s Kontakte, the tape sounds and instruments are so well structured that it is
irrelevant for the audience whether the piece is interactive or not (Stroppa, 1999).
In fact, how the performer relates to the electronics should be a result of a composition’s
structure as created by the composer (Menezes, 2002). Different musical situations often require different solutions. In the next section I will discuss different ways in which instrumental
and electronic parts can be synchronized in a mixed work. Two assumptions will be made: (1)
the use of a broad concept of interactivity, implying a large range of relationships and forms of
control (including fixed media); (2) the assumption that no particular form of interactivity is
necessarily better or worse than another; it depends on the musical context.
4.2. Forms of implementing interactive systems and performance issues
In an interactive music system, the sound produced by the performer (and any other input
information) is sent to a computer running interactive software. The acoustic information (pitch,
duration, dynamics, etc.) is analyzed, interpreted, and used to generate data. This data influences
the composition, which will finally be played by the computer. The computer output can also
include visual feedback for the performer and can control multimedia effects, such as a lighting
system, a video, or a slide projection. FIG. 3 represents this system.
FIG. 3. A representation of an interactive system (based on Winkler, 1998, 6-7)
In a work with computer-based live electronics, both human and computer have prescribed
roles. The musician usually has a musical score to follow; the computer can have different tasks
according to the piece: (1) it can play a prerecorded sound or sequence of sounds at a
predetermined moment, according to a ‘score’; (2) it can record sounds coming from the
performer, and store these in its memory to be used later in the piece; (3) it can process in real
time the acoustic sounds, using effects such as spatialization, filters, delays, ring-modulators, and
so on; (4) it can also make a synthesis of new sounds or sequences of sounds, based on
algorithms that use data generated by the performer. One of the main challenges of the system is
to guarantee that natural deviations in the performance (rubato, vibrato, or even possible
mistakes) will be understood by the computer, just as another musician would understand them. The computer, therefore, has to be able to adjust its time-clock so that performer and computer stay together throughout
the performance, as in chamber music: “Live electroacoustic performance is a kind of chamber
music, in which some ensemble members happen to be invisible” (McNutt, 2003, 299).
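To make this idea of machine ‘listening’ concrete, here is a deliberately crude sketch (an illustration only, assuming NumPy; the works discussed in Chapter 5 rely on more robust methods) of an amplitude-threshold onset detector. Each detected attack could advance the computer’s position in its ‘score’, one rudimentary way of letting the electronics follow the performer:

    import numpy as np

    def detect_onsets(signal, sr, threshold=0.2, min_gap_s=0.1):
        # An onset is counted when the short-term peak level crosses
        # 'threshold'; a refractory gap prevents one stroke from being
        # counted twice. Returns attack times in seconds.
        hop = sr // 100                        # 10 ms analysis frames
        min_gap = int(min_gap_s * sr / hop)    # refractory period in frames
        onsets, last = [], -min_gap
        for i, start in enumerate(range(0, len(signal) - hop, hop)):
            peak = np.abs(signal[start:start + hop]).max()
            if peak > threshold and i - last >= min_gap:
                onsets.append(start / sr)      # time of the attack
                last = i
        return onsets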
Forms of implementing this interaction, i.e., adjusting the time-clock of the computer to the
timing of the performer, will be discussed later. First, some considerations about the relation
between performer and electronics in works with fixed media will be presented.
4.2.1. Performing with fixed media
The only ‘interactive’ control in a piece with tape is the ‘play’ button. As soon as the piece
starts, the performer has no more control. Therefore, this kind of work is normally not considered
interactive, since the level of interaction is too low. Nonetheless, there are still important issues
which must be addressed concerning the relation between performer and electronics in this
repertoire. As stated in Chapter 2, some of the main recurrent questions of playing with fixed
media are: (1) how to provide some freedom for the performer; (2) how to ensure that the
synchronization between tape and performer will be perfect; and (3) how to create a situation in
which the player can blend his sound with the tape. These three questions will also be addressed
now using three pieces as examples: Temazcal (1984), by Javier Alvarez; Linde (1994), by
Daniel Almada; and Losing Touch (1993), by Edmund Campion.
Temazcal is a piece for maracas and fixed electronic media. It is usually played with a CD,
but there is also a second version for quadraphonic tape. Although the electronic part is
completely fixed, the performer has a great amount of freedom, since the instrumental part is
mostly an improvisation based on short rhythmic patterns that are traditional in Latin-American
maraca technique. The tape part is a studio manipulation of sounds of maracas and other folk
instruments. The performer plays his part in response to the motions suggested by the pulses on
the tape. The score shows him a main pattern to follow, and he can modify it - or even replace it
- by adding new short rhythmic cells. This process creates some intricate cross-rhythms between
tape and performer. According to Alvarez, this is attractive for him in two senses:
Firstly, in that the performer must, by the very nature of the work, engage in active
listening, and almost dance in order to pull the piece together. Secondly, this simple
approach breaks away from the concept of the tape as a straight-jacket: in Temazcal it is
possible to interpret freely the suggested material, but, even under these apparently loose
conditions, synchronisation points invariably remain extremely accurate while the
response to the material on tape remains seemingly personal. (Alvarez, 1989, 218)
The second example is Linde, written for vibraphone and tape. Composer Daniel Almada is
especially interested in micro-tonality. To create the tape part for this piece he used real
vibraphone sounds, transforming them and adding glissandi and microtones to the instrument.
Linde is the game of combinations, contrasts and dialogues which may arise between the
vibraphone and magnetic sound tape. Separation or combination? Who does what? Who
is who? (Séjourné, s.d.)
To stress the game proposed by the composer, the performer has to search for a way to
produce a timbre that matches the vibraphone on the tape. The percussionist can experiment with
the choice of mallets, type of stroke, and position of the stroke on the bar. Since different halls
have different acoustics, the performer has to check his choices in the dress rehearsal. Balance is
also important to create a dialogue between the two parts, in which the audience should not be able to distinguish perfectly ‘who does what’.
Synchronization is another important aspect of the piece. Absolute precision is required in
moments of polyrhythm and in passages in which percussionist and tape play interlocking
rhythms (FIG. 4). The piece is played without a click track, but to help the performer acquire the
precision required in the work, an extra track is included on the tape (in fact, a CD) for rehearsal
purposes, with a click track added to the electronic part.
FIG. 4. Daniel Almada, Linde (top staff = vibraphone part, bottom staff = electronic part)
Finally, in Losing Touch, by Edmund Campion, rhythmic precision is so crucial that the
performer has to use a click track during the performance. According to the composer, the use of a click track is
to ensure synchronization and aid the illusion of integration. I found that keeping the
density of events high, self-similar, and constantly moving helped form a unified
perceptual scene. Re-synthesis of spectral models based on the solo instrument was
useful for sonic coherence. (Campion in Makan, 2004, 17)
Some critics might say that pieces with tape contain an inherent problem: they leave no
flexibility to the performer. However, this is not necessarily true, as one can see in Temazcal.
Moreover, if strict synchronization is required for some musical reason, the performer can use a
click track for rehearsals or even for performances. Although one could say that this represents a
‘straight-jacket’, it is actually just a trade-off that is common in music, and with which
composers have to deal very often. In this case, Campion compromises the performer’s
flexibility in order to create a very complex and interesting sonic space.
These three examples show different strategies that composers (and performers) can use to
combine instrumental and electronic sounds in pieces with fixed media. It is also possible to have
more interactivity in this repertoire, i.e., to make the electronic part ‘follow’ the performer to
some extent. For this, the fixed media should be divided into different tracks or audio files that
can be triggered by the performer (or someone else) at the right moment in the piece. This
procedure gives more interpretive freedom to the performer, allowing him or her to change the
tempo of some musical phrases and still be synchronized with the fixed media. Six Japanese
Gardens, by Kaija Saariaho, is an example of an interactive piece based on the use of fixed
media: pre-recorded audio files are triggered when the performer presses a MIDI pedal
connected to the computer. Triggering audio files usually involves the use of computer
interactive systems, which are also part of performance with live electronics.
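The triggering mechanism itself is simple to sketch. The fragment below is an illustration only, not Saariaho’s actual Max patch: it assumes a hypothetical cue list of sound-file names and the third-party Python libraries mido, soundfile, and sounddevice, and plays the next pre-recorded file each time a MIDI pedal is pressed:

    import mido
    import soundfile as sf
    import sounddevice as sd

    # Hypothetical cue list: one pre-recorded file per pedal press.
    CUES = ["cue01.wav", "cue02.wav", "cue03.wav"]

    def run_cues(port_name):
        # Trigger the next file in the cue list on each press of the
        # MIDI sustain pedal (controller 64).
        cue = 0
        with mido.open_input(port_name) as port:
            for msg in port:  # blocks, yielding incoming MIDI messages
                pressed = (msg.type == "control_change"
                           and msg.control == 64 and msg.value >= 64)
                if pressed and cue < len(CUES):
                    data, rate = sf.read(CUES[cue])
                    sd.play(data, rate)  # playback overlaps the live part
                    cue += 1

    # run_cues(mido.get_input_names()[0])  # e.g., the first MIDI device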
4.2.2. Performing with live electronics
Performance with live electronics implies a higher level of interaction between performer and
electronics. The system should be able to react to the sounds and/or gestures produced by the
performer. These are captured as ‘human inputs’, which the computer can process in different ways.
Human inputs
One of the most common ways that live-electronic music is performed is with the assistance
of an extra person controlling the software. The technical assistant, often the composer, operates
the system by providing the necessary cues at the right times. The computer is, in this case, just
another instrument, ‘played’ by another human. It follows the performer indirectly, via this extra
performer introduced in the system. Although the use of a technical assistant is a very safe way to operate computer systems, it also has its negative points. The performer becomes dependent on another person to play the piece. A work that should be a solo becomes a duo, and this makes it harder for the performer to program the piece in his or her solo concerts. Conversely, when performers control the electronics themselves, they develop a more intimate relationship with the piece, which can positively affect their performance. Moreover, visual cues may be necessary between performer and technical assistant. This can be a problem in some concert situations, due to lighting issues or to the distance from the computer to the stage. Finally, if a technical assistant is involved, the performance must be thought of as a duo, and there must be enough rehearsals for both musicians.
Live-electronic performance can happen without requiring an extra person to control the
synchronization between instrumental and electronic parts. The performer can transmit data to
the system in different ways:
A - MIDI devices: foot pedal / electronic pads / buttons
Foot pedals, electronic pads, and other ‘trigger buttons’ offer an easy, simple, and reliable way to trigger events. With the use of triggers, the composer can create sections in the piece that are automated. At certain structural points the computer can wait for the performer before continuing to the next section. Pedals also allow very precise coordination of attacks and events between performer and electronics. In Points in the Sky (1993), written by American composer Glenn Hackbarth, percussion, clarinet, and electronics have many precise attacks together during the piece. Some of them follow fermatas or silent moments. The only way to implement this is by using a trigger, like a pedal. The form of the device should be chosen depending on the musical situation. Singers, for example, can press a button, which can be hidden in their hands. Any device that can transmit MIDI information can be used. Kimura, however, believes that these ‘magic buttons’ visually distract the audience, bringing “the focus away from the music” (Kimura, 2003, 289). Moreover, they can negatively affect the performance. Violinists, for example, are not used to pressing pedals when playing, and this new gesture can interfere with the performance unless they have enough time to learn the ‘new technique’. This is not a big issue for percussionists, since they are used to adding new ‘instruments’ to their set-ups, and they are also used to playing with pedals (drum set, vibraphone, pedal bass drum). One last problem with the use of pedals is accuracy in fast passages: it can be impossible to articulate fast rhythms using MIDI pedals, and synchronization can suffer as a result.
Although it has some drawbacks, the use of pedals is very common, and many performers
enjoy the control over the timing of the computer that the pedal can provide (McNutt, 2003,
300).
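To make the mechanics of triggering concrete, the following minimal sketch (in Python, with purely hypothetical names; it does not reproduce any actual patch) shows how successive pedal presses can step through an ordered list of cues, so that the electronics wait at structural points until the performer is ready:

```python
class CueList:
    def __init__(self, cues):
        self.cues = cues          # ordered actions, one per pedal press
        self.index = 0

    def on_pedal_press(self):
        """Called whenever the MIDI pedal sends its 'down' message."""
        if self.index >= len(self.cues):
            return                # no cues left; ignore extra presses
        action = self.cues[self.index]
        self.index += 1
        action()                  # e.g., start an audio file, change an effect

# Hypothetical usage:
cue_list = CueList([
    lambda: print("cue 1: start audio file A"),
    lambda: print("cue 2: stop audio file A"),
])
cue_list.on_pedal_press()
cue_list.on_pedal_press()
```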
B – General sensors
Sensors can be defined as devices that convert “a form of energy into an electrical signal”
(SensorWiki website, 2008). With the use of sensors and a sensor interface – which converts electrical signals into digital data – a computer interactive system can receive data captured from gestures made by the performer. Gestures like moving the instrument, shifting the position of the hands on the instrument, or pressing a certain surface of the instrument can be detected and measured, generating digital data to be used as human inputs in an interactive system. Using gestures that are natural to the performance of the instrument is preferable to adding new ones, since performing an instrument is a complex task involving the subtle control of a number of small movements. If new gestures are proposed to the performer, they should not
musician, the more disruptive – even for a brilliant and skilled performer” (McNutt, 2003, 299).
Thus, the choice of sensors and the way they will be placed should be considered carefully.
C - Audio Signal, captured by microphone
The sounds produced by the performer are captured by a microphone and analyzed by the
software interface. Computer systems can be programmed to recognize in real time the pitch of
notes being played (pitch recognition); to detect the tempo of the performance (beat detection);
and to follow the dynamics - including identifying accents - by measuring the amplitude level of
the performance (amplitude follower). By comparing the parameters extracted from the analysis
of instrumental sound (pitch, timing, and/or dynamics) to a pre-determined score, the computer
can identify where the musician is, and, therefore, it can follow the performer. This method of
synchronization is called score following. The main advantage of this system is that the performer can, in principle, be completely free in the interpretation. He or she does not need to make any extra effort (use pedals, buttons, sensors, visual cues…) – the performer just plays, and the computer is able to follow by comparing the real performance with the score. An intermediary system is
called score orientation. Instead of following the whole performance, the computer looks for
structural points.
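As an illustration of the simplest of these analyses, the sketch below (Python; the window size and threshold are arbitrary assumptions, not values drawn from any system discussed here) implements a rudimentary amplitude follower that reports attacks as rising edges of the RMS envelope:

```python
import math

def amplitude_follower(samples, rate, threshold=0.2, window_ms=10):
    """Track the amplitude envelope of a signal and report attacks,
    i.e., moments where the envelope first rises above a threshold."""
    window = max(1, int(rate * window_ms / 1000))
    attacks, previous = [], 0.0
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        level = math.sqrt(sum(x * x for x in chunk) / len(chunk))  # RMS
        if level > threshold and previous <= threshold:
            attacks.append(start / rate)        # attack time in seconds
        previous = level
    return attacks

# A percussive test signal: 0.1 s of silence, a loud burst, then silence.
signal = [0.0] * 4410 + [0.9] * 441 + [0.0] * 4410
print(amplitude_follower(signal, 44100))        # -> [0.1]
```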
Live-electronic performance: forms of interaction
A – Score following
Score following is a
process of tracking live players as they play through a pre-determined score, usually for
the purpose of providing an automatic computer accompaniment to a live player. The
procedure for score following is rather simple: enter a score into a computer in some
form, and then play the score as real-time input to the computer (either via a MIDI
interface or a microphone and acoustic analysis), comparing the computer's stored score
to the musician's playing on a note by note basis. (Puckette and Lippe, 1992)
Score following has been used in many pieces by Philippe Manoury and Cort Lippe, among
others. It “allows more detailed control moment to moment. (…) [and] can be used to create an
effective and highly responsive type of interaction” (McNutt, 2003, 300-301). Most of the
existing score following systems are based on pitch detection. However, this method can present serious problems because of one basic issue: both player and software can make mistakes. In a score following system, the player is not a slave to the tempo, but he or she can become a slave to perfection, since deviations in performance, such as vibrato, or some imperfection in a very fast passage, can cause problems in the system. The system can also make mistakes due to acoustic properties of the instrument or of the hall, or because of unexpected noises. Moreover, in its current state of development, score following works only for detecting melodies, not for accurately following polyphonic structures. Because of these problems, in pieces with score following someone must be at the computer to monitor and correct any mistakes.
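The note-by-note comparison that Puckette and Lippe describe can be sketched in a few lines (Python; the small ‘lookahead’ used here is just one simple way of tolerating mistakes, not a reproduction of any particular system):

```python
def follow(score, detected_pitch, position, lookahead=3):
    """Return the new score position after hearing detected_pitch,
    or the old position if no plausible match is found nearby."""
    for offset in range(lookahead):
        i = position + offset
        if i < len(score) and score[i] == detected_pitch:
            return i + 1          # matched: advance past this note
    return position               # a mistake (player or detector): stay put

score = [60, 62, 64, 65, 67]      # MIDI note numbers of the stored score
pos = 0
for heard in [60, 62, 63, 65]:    # 63 is a wrong or misdetected note
    pos = follow(score, heard, pos)
print(pos)                        # 4: the follower recovered after the error
```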
B - Score orientation
Score orientation is a more flexible type of score following. Instead of following the whole
performance, the computer adjusts its time-clock to specific moments of the piece. This is what
happens in works that use trigger pedals: at a specific moment, the performer triggers an event in
the electronic part, ensuring that both parts will be perfectly synchronized. Score orientation can
also be based on some acoustic parameter of the performance, using pitch detection, amplitude
follower, or beat extraction. Kimura developed her ‘flexible time window’, a simplified form of score orientation. She divides the piece into sections with flexible, predetermined durations (similar to time brackets). The system waits for trigger events before moving to the next section. In general, she uses a pitch that should be detected by the computer as a trigger. But, if detection fails, the system moves to the next section automatically after some time. Clearly, Kimura compromises the idea of freedom somewhat, since she knows that the pitch detector is not perfect and might fail during a performance. But with this trade-off, she can perform with a very ‘robust’ and safe system, which requires neither technical assistance nor the use of foot pedals (Kimura, 2003).
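Kimura does not publish code for the ‘flexible time window’, but its basic logic can be sketched as follows (Python; the polling interval and timeout are illustrative assumptions):

```python
import time

def flexible_time_window(trigger_detected, timeout_s):
    """Wait for the expected trigger (e.g., a detected pitch); if the
    detector fails, move on anyway once the time window expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if trigger_detected():    # poll the (imperfect) pitch detector
            return "triggered"
        time.sleep(0.005)
    return "timed out"            # safety net: advance automatically

# A hypothetical detector that never fires, to show the fallback:
print(flexible_time_window(lambda: False, timeout_s=0.05))  # "timed out"
```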
C – Systems open to indeterminacy and improvisation
A more open approach to the relation between performer and computer is found in systems
that are based on indeterminacy or modeled to create an interactive environment for
improvisation. George Lewis’ Voyager, already cited here, is a system created to improvise with
the performer. The computer plays sequences of sounds, generated by predetermined algorithms,
with some random component. The software interprets the musical material played by the
performer and generates data that is used in the predetermined algorithms. The result is a duo
improvisation between human and computer. This approach gives total freedom to the performer,
although it is less useful for musical structures requiring strict coordination between the system
and the performer.
5. Looking at the pieces [2]
5.1.
Geof Holbrook [3]: Wooden Stars (2006 - rev. 2008)
5.1.1.
General information
Wooden Stars [4] was the result of a collaboration with composer Geof Holbrook; he was asked
to write a piece for solo percussion and electronics in which the performer would control all the
electronics. Holbrook decided to write a piece using fixed audio files, which are triggered by the
performer using a MIDI pedal. The audio files contain synthesized sounds created with Apple’s
Logic Pro software. Most of these sounds have a wooden character, somewhat similar to the sound of the wooden instruments used by the performer. The percussion set-up used in the piece
includes vibraphone, one low tom-tom, one splash cymbal, a B-flat crotale, three cowbells, four
woodblocks, one cajon (or a wooden box), and a long bamboo guiro, specifically built for the
piece (FIG. 5).
Wooden Stars explores the effects of acceleration and deceleration. According to the
composer:
Wooden Stars is a short study for solo percussion and computer that explores ideas from
an earlier work within an electronic medium. In Smaller Knives (2004) for mixed quintet,
the ensemble carries out an "infinite deceleration", in which the act of slowing down is
treated as equivalent to "zooming in". In Wooden Stars, the zoom lens is erratic, with the
music speeding up and slowing down and therefore alternately revealing and obscuring
detail. This piece also adds the complication of independent (and changing) tempos,
creating a free rhythmic counterpoint between the percussion and electronic parts. (qtd. in
Réseaux website, 2008)
[2] The four pieces referred to in this chapter were performed as part of a lecture-recital given on February 8th, 2008, in Tanna Schulich Hall, McGill University, Montreal.
[3] Geof Holbrook (b. 1978, Canada) has had works performed in Canada and in Europe, including
performances by the Nouvel Ensemble Moderne, Esprit Orchestra, the Bozzini Quartet, Ensemble of the
National Arts Centre Orchestra, and the Ensemble Orchestral Contemporain in Lyon. He has been
awarded five times in the SOCAN Competition for Young Composers and was recently awarded a Prix
Opus for "Création de l'Année". He was chosen to participate in the NEM International Forum for Young
Composers, held in Amsterdam in 2006. He has participated in composition courses at le Domaine
Forget, Royaumont, the National Arts Centre, and at IRCAM in Paris, in addition to his graduate studies
at McGill University, where his mentors included Denys Bouliane, John Rea and Sean Ferguson. He is
currently pursuing doctoral studies in composition at Columbia University in New York. (Geof Holbrook
website, 2008)
[4] Wooden Stars was awarded the 1st prize of the Hugh Le Caine Prize in the 2007 SOCAN Foundation Awards for Young Composers.
The use of the long bamboo guiro built for the piece helps to materialize and represent the
effect proposed by Holbrook. When the speed of playing (i.e., scraping) on the guiro is very
slow, it is possible to hear each of the strokes that together make the characteristic rasping sound
of the instrument. This is exactly what the composer calls ‘zooming in’, thus ‘revealing details’.
On the other hand, when the speed of playing is fast, the strokes are not heard individually
anymore; together they create the rasping sound of the guiro. This is a ‘zooming out’ effect, thus
‘obscuring details’.
FIG. 5. Set-up of Wooden Stars, including some performance notes.
5.1.2.
The role of technology
Although it is a piece with fixed media, Wooden Stars is highly interactive: the electronics
are controlled by a Max patch (FIG. 6) that allows the performer to trigger, stop and sustain
several audio files by pressing a pedal. The use of the pedal allows some freedom for the
performer, who can be in charge of the tempi. Each moment that the pedal should be pressed is
indicated in the performance score (as ‘trig.’), and each one receives a number (and sometimes a
letter). There are 51 trigger events (in the revised score) during the piece, and some of them have
different functions. Trigger number one, for example, starts an audio file, while trigger number
two stops this audio file.
FIG. 6. Image of the Max patch running during a rehearsal of Wooden Stars
The Max patch created for Wooden Stars also provides other means of synchronization. It has a timer that helps the performer coordinate precisely with events whose durations are indicated in the score in seconds. It also displays the waveform of the audio file being played. This is very useful in the section corresponding to ‘trigger event number 32’ in the score (FIG. 7). Here the performer has to play single notes and chords on the vibraphone, corresponding precisely to the fixed media, which has many accelerandos and ritardandos. Watching the cursor progress through the waveform (FIG. 8) enables the performer to anticipate the exact moment to play. The performer must play each of the chords and single notes in this section at the moment that the cursor reaches the end of the window.
FIG. 7. Wooden Stars: Bars 64 to 67 (cue 32)
FIG. 8. Three images of the cursor progressing through the waveform of an audio file.
5.1.3.
Performance issues
Synchronization is particularly important in this piece, since both the instrumental part and
the electronic part are based on changes of speed. The performer synchronizes his or her
performance with the computer in different ways: (1) by following audio files; (2) by watching a
timer; (3) by looking at the waves of audio files shown on the screen of the computer; (4) by
using an electronic trigger pedal. In (1), (2), and (3) the performer has to follow the computer,
but in (4) the computer follows the performer. This relation is somewhat similar to a chamber
music performance, where players have to follow each other. The performance of Wooden Stars
can thus be compared to a duo between performer and computer.
In some sections of the piece, the composer uses perfect synchronization to create an illusion
of live processing, as one can see in bars 37 to 45 (FIG. 9). Here, an audio file is played while
the pedal is being pressed. When the performer releases the pedal, the sound stops. The
performer scrapes the guiro for the same duration that he or she presses the pedal, and the result
sounds like a ‘modified’ guiro, which is actually the guiro with a fixed electronic sound added.
FIG. 9. Wooden Stars: Bars 37 to 45
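The logic of this pedal ‘gate’ is simple enough to sketch (Python, with hypothetical names; Holbrook’s actual electronics are a Max patch, not reproduced here):

```python
class GatedPlayer:
    """Audio that sounds only while the pedal is held, so the scraped
    guiro and the fixed electronic sound start and stop together."""
    def __init__(self, filename):
        self.filename = filename
        self.playing = False

    def pedal_down(self):
        self.playing = True
        print(f"start {self.filename}")  # stand-in for starting playback

    def pedal_up(self):
        self.playing = False
        print(f"stop {self.filename}")   # stand-in for stopping playback

player = GatedPlayer("guiro_layer.aif")  # hypothetical file name
player.pedal_down()  # performer scrapes the guiro while holding the pedal
player.pedal_up()    # releasing the pedal silences the electronic layer
```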
The use of pedals in mixed works can be considered a new performance gesture that has to
be mastered by the performer. Although for a percussionist this is not necessarily a new gesture,
its use in this piece can still present a challenge, since it happens so many times. One particular
section should be mentioned: section 32, when the performer has to control both the electronic
pedal and the vibraphone pedal at the same time. This, while standing, is not very comfortable.
In the revised edition, the composer avoided the use of the trigger pedal during the vibraphone
section of the piece. For this reason, instead of short audio files triggered by the performer, the
electronic part of this section now plays one long audio file, and the performer has to listen (and
watch the laptop screen) to be precise. In this new edition, therefore, another new ‘gesture’
becomes important: looking at a computer screen while playing.
Since the performance of Wooden Stars is somewhat similar to a chamber music
performance, learning to play the instrumental part of this piece without rehearsing with the
electronics would be the same as learning part of a duo without ever rehearsing with the other
player. The performer should practice his or her part with the electronics. For this, he or she
needs to be able to set up a sound system with a laptop running Max/MSP, as well as a MIDI
interface connecting the pedal. These tasks can be very simple if, as is the case with Wooden
Stars, the patch is easy to use and the score includes enough instructions.
5.2.
Sérgio Freire [5]: Anamorfoses (2007)
5.2.1.
General information
Anamorfoses is a work in three sections for vibraphone, tuned gongs, and live electronics.
The term ‘anamorfose’ (anamorphosis) refers to distortions in an image such as those produced by curved mirrors. It was used by Pierre Schaeffer to describe certain characteristics of musical listening. Schaeffer realized that manipulating certain characteristics of a recorded sound, e.g., playing it backwards or removing the attack, could create a very different perception of that sound.
According to the composer, the initial idea for this piece was to explore some of the most
perceptible acoustic characteristics of the vibraphone (clear attack and pitch, long resonance with
relatively simple harmonic spectra), in a wider timbral context (Freire, 2008). Therefore,
different electro-acoustic processes are used to manipulate and expand the sound palette of the
instrument.
5.2.2.
The role of technology
In this work, technology defines the musical form, since a different effect is used in each one
of the three sections of the piece:
- Section 1 uses artificial resonances: durations of some notes are extended;
- Section 2 uses frequency modulation: the sound of the vibraphone is transformed by an FM effect, described below;
- Section 3 uses loops: phrases played on the vibraphone are recorded and looped with a
random aspect that will be explained later.
The electronic part of the piece is performed by a Max patch created by the composer. All the
electronic sounds in the piece are derived from real-time processing of the vibraphone. In fact,
the use of electronics creates the impression of an augmented vibraphone, since the sound
[5] Sérgio Freire (b. 1962, Brazil) is professor of composition and music technology at Minas Gerais
Federal University (UFMG), in Belo Horizonte, Brazil. Freire studied in Brazil, Holland, and
Switzerland, with Guerra Peixe, Eduardo Bértola, Thomas Kessler, and Sílvio Ferraz. He holds a Ph.D. in
Communication and Semiotics from PUC/SP. His artistic and research activities are focused on the use of
new technology in the concert hall, especially concerning the important role of speakers and new
electronic musical instruments. His works have been performed in Brazil, Chile, Germany, Holland,
Switzerland, and Canada. (Sérgio Freire website, 2008)
possibilities of the vibraphone are extended. The performer controls the sound processing by
using a pedal. The Max patch also detects pitches and attacks, and this information is used in the
sound processing, as will be explained later. Sound diffusion also helps to create the impression
of an augmented vibraphone in this piece. The speakers are placed on stage very close to the
vibraphone, creating the illusion that all the sound is really coming from the instrument.
5.2.3.
Performance issues
Section 1:
This section is based on the use of artificial resonances of notes and chords played on the
vibraphone. When the electronic pedal is pressed (after the attack of a note or chord) the patch
records a short ‘grain’ of audio and plays it continuously, creating an artificial resonance that is
added to the acoustic sound. The system plays three ‘artificial’ resonances simultaneously. The
fourth resonance recorded replaces the first, so that there is a continuous, but ever-changing, set
of three resonances being played. The artificial resonances should blend with the acoustic
resonances in order to create the illusion for the audience that they are all part of the natural
resonance of the instrument. The placement of the speakers close to the instrument helps to
enhance this illusion.
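The rotating set of resonances can be sketched as a small ring buffer (Python; the strings stand in for the short audio ‘grains’ recorded by the patch):

```python
from collections import deque

class ResonanceBank:
    """Keep at most three looping grains; a fourth recording replaces
    the oldest, giving a continuous but ever-changing texture."""
    def __init__(self, voices=3):
        self.grains = deque(maxlen=voices)   # oldest grain dropped first

    def record_grain(self, grain):
        self.grains.append(grain)            # grain: a short audio buffer

    def active(self):
        return list(self.grains)             # grains currently looping

bank = ResonanceBank()
for label in ["grain 1", "grain 2", "grain 3", "grain 4"]:
    bank.record_grain(label)
print(bank.active())   # ['grain 2', 'grain 3', 'grain 4']
```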
Dynamic level is an important issue in controlling the resonances. If the volume of the sound recorded in the fourth resonance is noticeably different from that of the first one, there will be a sudden change of dynamics. Since the score asks for smooth crescendos and decrescendos, the performer must keep track of dynamics carefully in order to avoid such a change, thus making the resonance blend with the overall texture. The performer controls the dynamics of the artificial resonance in two ways: by controlling the volume of the note played and by choosing the moment to press the electronic pedal and record the resonance. If the pedal is pressed just after the attack, the patch records a louder grain, and the artificial resonance will sound loud. If the
performer waits longer to press the pedal after attacking the note, the resonances will naturally
fade out, and the grain recorded will produce a soft artificial resonance. In the score (FIG. 10),
the composer indicates each note or chord after which the pedal should be depressed, without
specifying the exact moment. The performer can therefore take advantage of this freedom to
have subtle control of the dynamics of the resonance.
FIG. 10. Anamorfoses: beginning of 1st Section. V indicates ‘pressing the electronic pedal’
FIG. 11 shows a part of the first section that includes rolled notes. Interesting variations in
the sound of the artificial resonances can be obtained in this section by exploring different
tremolo speeds and by using different mallets. The performer can thus create more diversity in
the sound texture produced by the set of three artificial resonances.
FIG. 11. Anamorfoses: Section 1 – part with rolls
Section 2:
This section is based on a sound synthesis technique called frequency modulation (FM).
Freire uses this technique to change the sound of the vibraphone and make it closer to the sound
of the gongs called for in the piece. When the pedal is pressed, the frequency of the first note
played is detected and used to modulate (FM) the following notes. Therefore, the patch must be
able to recognize pitch. The patch also has to detect the moment of attack since there is a
filtering process used in the modulation that is triggered by the attack of the note that is to be
modulated. An amplitude follower is used to detect the attacks. The patch provides visual
feedback to the performer when an attack is detected (a yellow circle blinks at this moment) and
also indicates the pitch detected (FIG. 12). Playing loud, slow, and detached makes the patch
work more effectively: both pitches and attacks are more accurately recognized. Musically, this
playing style gives a ‘solemn’ character to the movement, which is reinforced by the bell/gong-like timbre produced by the modulation of the vibraphone sound.
FIG. 12. Anamorfoses: patch including attack and pitch detection
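For readers unfamiliar with the technique, the sketch below shows classic two-oscillator FM (Python). Note the difference in scope: Freire’s patch modulates the live vibraphone signal itself, whereas this sketch merely synthesizes a carrier from scratch, using an assumed detected pitch; the inharmonic ratio hints at why the result can sound bell- or gong-like:

```python
import math

def fm_tone(carrier_hz, modulator_hz, index, rate=44100, duration=0.5):
    """Two-oscillator FM: a modulator bends the carrier's phase,
    enriching the spectrum; non-integer frequency ratios give
    inharmonic, bell-like timbres."""
    out = []
    for n in range(int(rate * duration)):
        t = n / rate
        mod = index * math.sin(2 * math.pi * modulator_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod))
    return out

detected = 440.0                  # pitch reported by the pitch detector
samples = fm_tone(carrier_hz=detected,
                  modulator_hz=detected * 1.4,   # non-integer ratio
                  index=3.0)
print(len(samples))               # 22050 samples = 0.5 s of audio
```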
Section 3:
In Section 3, the pedal starts and ends a recording. The patch recognizes the attacks or
accents played during the recording by using an amplitude follower. It then divides the phrase
into small parts, according to the accented groupings. These parts are shuffled and looped
randomly. Two consecutive loops are played simultaneously. When the third one is recorded, it
replaces the first. Although the notation does not imply absolute synchronization between the
loops, there are some points of coordination indicated in the score. The performer must listen to
what is being played by the computer and play the next loop accordingly.
FIG. 13 shows a phrase of the piece that is looped. In this phrase the patch should detect three attacks (accents), so the phrase is split into three parts. These three parts are played continuously by the computer in a random order, creating a random loop. FIG. 14 shows one possible result of this process.
The performer should be very careful when playing the accents, to make sure that they will
be correctly detected by the patch. Over the loops, the performer improvises phrases with
vibraphone and gongs, creating a polyphonic texture with three voices: two looped (echoes from
the past) and one improvised (the present time).
FIG. 13. Anamorfoses: Section 3 – phrase 8
FIG. 14. Anamorfoses: Section 3 – one possible result of a ‘random loop’ of phrase 8
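The ‘random loop’ mechanism just described can be sketched as follows (Python; in practice the accent positions would come from the amplitude follower, and the integers here merely stand in for recorded audio):

```python
import random

def random_loop(phrase, accent_positions):
    """Split a recorded phrase at its detected accents, then yield the
    resulting segments continuously in an ever-changing random order."""
    bounds = accent_positions + [len(phrase)]
    segments = [phrase[bounds[i]:bounds[i + 1]]
                for i in range(len(bounds) - 1)]
    while True:
        yield random.choice(segments)     # one possible ordering at a time

phrase = list(range(12))                  # stand-in for recorded audio
loop = random_loop(phrase, accent_positions=[0, 4, 8])  # three accents
for _ in range(4):
    print(next(loop))                     # segments in shuffled order
```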
Some phrases in this section include notes played with a bow. However, bowed notes do not
have a clear attack, and therefore cannot be detected by the patch. Possible solutions for this
problem include adding an extra attack with a mallet (one octave down), or adding another pedal
that the performer can use to indicate the beginning of each bowed note. The use of two pedals
on an instrument that is played standing, and which itself has a pedal, is not very comfortable.
Another strategy (used in the concert) is to have an assistant press a key on the computer, signaling the beginning of the bowed note.
Final comments:
Collaboration with the composer was very important in the process of creating this piece.
The composer adjusted the patch according to performance decisions and to acoustic
characteristics of the instrument used. Moreover, he included in the patch clear instructions and
simple means for adjusting the amplitude follower and the indexes of modulation used in the FM
effect. These adjustments are important since both amplitude follower and FM effect can be
affected by the acoustics of the hall, the sound of the instruments used in the performance, and the
amplification system (especially type and placement of microphones). Some compositional
decisions were also clearly influenced by the relation between performer and equipment. For
example, initially all the loops in Section 3 were perfectly synchronized with one another. As
this proved to be almost impossible to achieve in performance, the composer rewrote the phrases,
so that the loops were not required to coincide perfectly.
In Anamorfoses the role of the performer is crucial to make the electronics work properly.
Dynamics, timing, and articulation should be carefully controlled by the performer in order to
obtain subtle control of the electronic sounds. By adding new performance gestures (an extra
pedal to the vibraphone) and sound possibilities, the composer is, in fact, introducing an
augmented instrument, one which needs to be mastered by the performer. Practice time with
the equipment is essential, and the ability to set up the system is, again, necessary.
5.3.
Martin Matalon [6]: Traces IV (2006 - rev. 2008)
5.3.1.
General information
Traces IV is part of a series of pieces that composer Martin Matalon has written for solo
instruments and real-time processing. According to the composer, these pieces are written “in the
manner of a personal diary” (Matalon, interviewed by Benadon, 2005, 18). Traces III (for French
horn and live electronics), IV, and V (for clarinet and live electronics) also constitute a suite called
Nocturnes, for instruments, live electronics, and voices of a man (III), a child (IV), and a woman
(V). This suite is based on texts by Argentine writer Alan Pauls. The texts are also written in the
form of diaries. Traces III depicts the diary of a man, Traces IV, the diary of a child, and Traces
V, the diary of a woman (L’homme qui meurt, L’enfant qui attend, and La Femme qui fuit).
[6] Martin Matalon (b. 1958, Argentina) received his Bachelor degree in Composition from the Boston
Conservatory of Music in 1984, and two years later his Masters degree from the Juilliard School of
Music. In 1989, he founded Music Mobile, a New York-based ensemble devoted to the contemporary
repertoire (1989-96). In 1993, having settled in Paris, the composer collaborated for the first time with
IRCAM and worked on La Rosa profunda, music for an exhibition organised by the Pompidou Centre on
The Universe of Borges. IRCAM commissioned a new score for the restored version of Fritz Lang's silent
film, Metropolis. After that considerable work, Martin Matalon turned to the universe of Luis Buñuel,
consecutively writing scores for three legendary and surrealistic films by the Spanish director: Las Siete
vidas de un gato (1996), for Un Chien andalou (1927), Le Scorpion (2001) for L'Age d'or (1931) and
Traces II (la cabra) (2005) for Las Hurdes (terre sans Pain) (1932). Matalon has written for, among
others, the Orchestre de Paris, Orchestre National de France, Orchestre National de Lorraine, the
Ensemble Intercontemporain, Les Percussions de Strasbourg, Court-circuit, and Ensemble Modern. He was a visiting professor at McGill University from 2004 to 2008 and, from 2005 to 2009, is composer in residence at the electronic studios of La Muse en Circuit. (Martin Matalon website, 2008)
In Traces IV, the marimba receives many sound treatments, extending the sonic possibilities
of the instrument in different ways: range, sustain time, timbre, and spatialization. Moreover, the
use of mokubios (very high Japanese woodblocks), added to the upper register of the marimba,
also acoustically expands the range and timbre of the instrument.
5.3.2.
The role of technology
The electronic part of the revised version of Traces IV is a mix of audio files and live sound
processing. Originally, the piece used only live-electronic processes, but because of constant
problems with feedback when using some of these real-time sound processes (especially
resonators), they were replaced by audio files. According to Matalon:
Before all the treatments were triggered by the soloist, but I had a lot of problems of
feedback with resonators (which I used quite a bit) so I decided to prerecord everything
that uses a resonator and use them as Sound Files. The first and second sections have a
mix of prerecorded material and real-time. As the prerecorded files were the length of
each movement I had to use a click (for the 1rst and 2nd mov) and I attached the real time
treatments to the click so the performer doesn't have to bother triggering the treatments
with the pedal. In the last sections I have not used resonators so you will have to trigger
the treatments with a pad or pedal. (Matalon, 2008)
Since the sound files are actually replacing real-time sound treatments, their materials are
directly derived from the material played during the performance. For example, the audio files
played in the first section of the piece contain pre-recorded artificial resonances (originally
created in real-time) of the marimba part. In the second section, the audio file is the actual
marimba part, with the addition of artificial resonances. This audio file is looped throughout the
second section.
The sound treatments used in the piece are divided into eight categories. A Max patch controls all the sound processes and triggers each treatment at the right moment. Most of the treatments are combined and used in chains (a sketch of this chaining follows the list below). A very similar Max patch is used for Traces III, IV, and V. Matalon uses the same sound treatments in these pieces in order to ensure unity among the three works, since they are part of the same trilogy. Moreover, to reinforce this unity, he applies models of resonances and filters of the other instruments of the trilogy (clarinet and French horn) to the sound treatments used in the marimba.
The eight modules of effects used in the piece are:
1) Polyphonic FFT filters
2) Granular synthesis (munger)
3) Resonance models
4) Harmonizers
5) Polyphonic delays
6) Reverb
7) Buffers (used for recording and playback)
8) Polyphonic ring modulators
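The chaining mentioned above can be sketched abstractly (Python; the module bodies are placeholders, since the actual DSP lives in the Max patch; the point is only that the output of one treatment feeds the next):

```python
def harmonizer(block):  return [s * 0.5 for s in block]  # placeholder DSP
def delay(block):       return block                     # placeholder DSP
def reverb(block):      return block                     # placeholder DSP

def chain(*modules):
    """Compose treatments so each one processes the previous output."""
    def process(block):
        for module in modules:
            block = module(block)
        return block
    return process

treatment = chain(harmonizer, delay, reverb)
print(treatment([1.0, -1.0]))   # [0.5, -0.5]
```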
Spatialization is another very important effect in the piece and is used in association with all
the eight modules. The sounds of the piece are projected by six speakers placed around the
audience. An object in Max/MSP called ‘Spat~’ is responsible for the real-time spatialization.
Processed sounds can thus be projected from different locations and can move in space.
Spat~ is a configurable real-time spatial processor integrating the localization of sound
events with room acoustic quality. […] The processor receives sounds from instrumental
or synthetic sources, adds spatialization effects in real time, and outputs signals for
reproduction on an electroacoustic system (loudspeakers or headphones). The general
approach taken in Spat~ can be characterized by the fact that it gives the user the
possibility of specifying the desired effect from the point of view of the listener rather
than from the point of view of the device or process used to generate that effect. (SPAT~
reference manual, 2006)
The complex electronic part of Traces IV is essential to the work. It is controlled by a Max
patch that is also very complex and does not include clear instructions. Because of this, despite
being commercially rented material, the patch had to be debugged before it would work reliably.
5.3.3.
Performance issues
The first issue to be addressed here is synchronization. In the original score the
synchronization was achieved by the use of a MIDI pedal connected to the computer. By pressing the pedal, the performer could trigger all the sound treatments of the piece while playing.
However, in the revised edition of the piece, some real-time sound treatments have been replaced
by audio files. To maintain the illusion of ‘live electronics’, the synchronization between
performer and electronics has to be precise. This synchronization is controlled by a click track.
In fact, with the use of the click track the patch can automatically trigger not only the audio files,
but also all the sound-treatment events of the piece. Since some of these events happen at very
precise moments, the performer must remain very precisely synchronized with the click track. In
Bar 3 (FIG. 15), for example, the patch records a short sample of the D flat tied between the first
and second beats. This sample is used in a granular synthesis effect (called ‘munger’ in the
score) based on looping the recorded grain, which creates a rich prolongation of the D flat. Since
the grain is very short (less than 50 milliseconds), the performer has to play very precisely to
ensure the effect will occur correctly.
FIG. 15. Traces IV: Bars 1 to 3
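A drastically simplified stand-in for this granular prolongation might look as follows (Python; the grain length and crossfade are assumptions, and the real ‘munger’ treatment is far more elaborate):

```python
def prolong(grain, repeats, fade=32):
    """Loop a very short recorded grain end to end, with a small
    crossfade at each boundary, to prolong a note indefinitely."""
    out = list(grain)                    # first pass: the grain itself
    for _ in range(repeats - 1):
        for i in range(fade):            # crossfade tail into next head
            w = i / fade
            out[-fade + i] = out[-fade + i] * (1 - w) + grain[i] * w
        out.extend(grain[fade:])         # then append the rest
    return out

grain = [0.1] * 2000                     # ~45 ms of audio at 44.1 kHz
sustained = prolong(grain, repeats=20)
print(len(sustained))                    # 39392 samples, ~0.9 s
```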
In the third section, where there is no click track, the performer uses a pedal to trigger events.
However, the pedal does not allow a great deal of freedom, since the score still requires a lot of
precision. Someone following the score can trigger the events of this section, and so the pedal
can also be replaced with a trigger button controlled by an extra person. In the third section, the
patch also creates artificial resonances of accented notes, so the performer must be very precise
with dynamics. As in Anamorfoses, this patch also uses an amplitude follower to detect accents.
Amplification and diffusion are also important issues. The piece utilizes spatialization using
six channels, so that the audience is surrounded by the sounds of the piece. The marimba and
electronic sounds must be properly balanced, but it is impossible for the performer to judge this
balance, since he or she cannot know exactly how the sound is being projected into the hall. Someone
else must therefore be in charge of the balance and sound diffusion, and the performer must
place his or her trust in this person. Sound adjustment is also crucial to make some of the effects
work properly. Although some technical problems were previously solved by the composer, in
the revised edition some sound treatments can still cause feedback problems. During rehearsals,
the system overloaded many times, creating a terribly loud noise, which could have damaged the
loudspeakers. Since it was impossible to know what in the patch was causing the feedback, the
only solution was to reduce the input volume at those problematic moments, and, to compensate,
increase the output level in order to keep the effect of the sound processing audible. [7]
In this piece the main challenge for the performer, in order to make the electronics work
properly, is to play as precisely as possible. The use of a pedal can also be an issue, although in
my performance it was replaced by the use of the space bar on the computer keyboard, controlled
by an extra person. In fact, a performance of Traces IV requires an assistant to control the
electronics. This person must be in charge of operating the patch, of controlling input levels (to
avoid feedback), output levels and balance. The relation between performer and assistant is very
important in this case, since the assistant plays a crucial role in the performance.
5.4.
Fernando Rocha: Improvisation for Hyper-kalimba (2008)
5.4.1.
General information
The hyper-kalimba is a Digital Musical Instrument created by Fernando Rocha with the
technical assistance of Joseph Malloch and the support of the IDMIL (the “Input Devices and
Music Interaction Laboratory”), directed by Prof. Marcelo Wanderley at McGill University. It
consists of a kalimba (a traditional African thumb piano) augmented by the use of sensors which
control various parameters of the sound processing. It has been used in concerts since October
2007, both in improvisational contexts and in written pieces (e.g., A la luna by Fernando Rocha
and Ricardo Cortes). Before commenting on the technical aspects of the instrument, I will give a
brief overview of the concept of Digital Musical Instruments, according to Miranda and
Wanderley (2006).
An overview of the concept of Digital Musical Instrument:
A Digital Musical Instrument (DMI) is an instrument consisting of two separate units: a
control surface (or gestural controller) and a computer system that generates sounds in real-time.
Gestural controllers are objects capable of recognizing human gestures and transforming them
into data to be sent to a computer. A joystick is a well-known example of a gestural controller. In
a DMI, the data (input) coming from the gestural controller is transferred to a computer system
that processes it and generates sounds (output).
[7] The composer is currently working with his assistant to fix the patch and make it more reliable. A
version with fixed media in six channels is also being prepared.
A special characteristic of DMIs is that the gestures of the performer are not physically
linked to the sound production. Unlike most acoustic instruments, the controller surface and
sound generator are two separate units. Therefore, a DMI does not naturally present a fixed
relation of cause-effect between gestures and sounds. Such a relation is inherent to acoustic
instruments and had been taken for granted throughout music history, until the advent of
electronic means. In DMIs, any movement can convey any sound. The process of linking
gestures to sounds is called ‘mapping’, and it is a distinctive aspect of digital musical
instruments.
FIG. 16. A possible approach to the representation of a DMI (Miranda and Wanderley, 2006, 3)
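A minimal sketch of such a mapping layer (Python; all gesture names and parameter curves are purely illustrative) makes the separation between controller and sound generator explicit:

```python
def map_gestures(controller_data):
    """Mapping layer: gesture data in, synthesis parameters out."""
    return {
        "cutoff_hz": 200 + 5000 * controller_data["hand_height"],
        "loudness":  controller_data["pressure"],
        "pan":       controller_data["tilt"],   # -1 = left .. +1 = right
    }

gesture = {"hand_height": 0.5, "pressure": 0.8, "tilt": -0.3}
print(map_gestures(gesture))
# Because the mapping is only software, the same gesture could be
# routed to entirely different parameters: nothing physically ties
# a given movement to a given sound.
```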
DMIs can be classified according to their resemblance to existing acoustic instruments. Four
general types can be found: ‘augmented musical instruments’; ‘instrument-like gestural
controllers’; ‘instrument-inspired gestural controllers’; and ‘alternate gestural controllers’.
Instrument-like controllers imitate the behavior of an acoustic instrument, e.g., electronic
keyboards and electronic drums. Instrument-inspired controllers use some common features of
traditional instruments without imitating them exactly. Examples include some models of air
percussion, such as the ‘Buchla lightning’ and the ‘Marimba Lumina’ (Buchla and associates
website, 2008). Alternate controllers are not modeled on traditional musical instruments, and, in
principle, these controllers can take the form of almost any object. Moreover, some controllers
do not even imply manipulation of an object. These are referred to as free-gesture controllers,
open-air controllers, or virtual musical instruments (Mulder, 1994). Movements in the air, for
instance, can be captured by electric field sensing (capacitive field), infrared signals, and
ultrasound transducers. The Theremin is an early example of an alternate free-gesture controller, and it uses a capacitive sensing technique (electric field sensing) (Miranda and Wanderley, 2006, 20).
Augmented musical instruments, also called hybrid or hyperinstruments, are created by
adding sensors to traditional instruments. These additional sensors give performers control over
other parameters besides the sounds the instrument normally produces. One advantage of
augmented instruments, compared to less instrument-like controllers, is that performers can
use the technique they have already developed with the acoustic instrument and its acoustic
sound.
Sensors added to the instrument can capture both the gestures made to produce the normal sounds of the instrument and the accompanying gestures that performers often make while playing. If new gestures are proposed to the performer, they should not interfere too much with the playing technique. Thus, the choice of sensors and the way they will be used and placed on the instrument should be carefully studied. One example is the use of pedals in a violin performance. Violinists are not used to pressing pedals when playing, so this new gesture can interfere with the performance unless they have enough time to learn this ‘new technique’.
Examples of augmented instruments include the hypercello created by Tod Machover and a zarb
with sensors developed at IRCAM by percussionist Roland Auzet (Auzet, 2000).
5.4.2.
The role of technology
In the development of the hyper-kalimba (FIG. 17), one of the main goals was to preserve the
traditional technique and sound characteristics of the kalimba, while adding new performance
and sound possibilities. For that reason, the sensors added to the instrument were chosen in order
not to interfere with its traditional playing technique. The sensors used are a piezo microphone,
two pressure sensors, and three accelerometers. In fact, the two pressure sensors added to the
back of the instrument are inspired by holes that are present in some kalimbas, and which
performers use to control tremolos.
FIG. 17. From left to right: front view of the hyper-kalimba; back of a kalimba (with 2 holes);
back of the hyper-kalimba (with the sensors).
The pressure sensors detect the amount of pressure applied to them. The first two sliders in
the Max patch shown in FIG. 18 register this: the figure indicates that the right sensor is not
being pressed (value=0) and the left sensor is being moderately pressed (value=300, on a scale
from 0 to 1000). With the use of accelerometers, it is possible to measure the tilt of the
instrument. The dial in the center shows the instrument’s left-right tilt position (FIG. 18 indicates
that the instrument is slightly tilted to the left). The next slider shows the instrument’s front-back
rotation, from a downward position of the front of the instrument to an upward one (in this case
the front of the instrument is in an upward position). Finally, the patch is able to recognize when
the instrument is upside-down (as the box is not marked, the instrument is not upside-down).
A sensor interface is used to convert the data from the sensors into digital data and send it via
USB to the computer. The acoustic sound of the kalimba is picked up by a contact microphone
and also sent to the computer. By using a custom-made Max patch, which I designed for the
instrument, the data from the sensors is mapped to control parameters of the sound processing.
The mapping used in the instrument preserves the melodic characteristic of the instrument, and
also creates new sound possibilities.
FIG. 18. Hyper-kalimba: patch showing input data from sensors
Current mapping of the instrument (a partial sketch in code follows this list):
Pressure Sensor 1 (‘pressureR’): controls a pitch transposition effect. The harder it is pressed,
the greater the transposition.
Pressure Sensor 2 (‘pressureL’): controls a ring modulation effect. The pressure applied to
the sensor determines the frequency used to modulate the sound of the kalimba. When the
pressure is low, this frequency is less than 16 Hz, which creates an effect similar to a tremolo.
Pressing harder causes the frequency to become higher and the effect is a change in the timbre of
the instrument, making it sound bell-like.
Position (horizontal axis): controls a delay effect. When tilted to the left the delay is panned
to the left; when tilted to the right, it is panned to the right.
Position (vertical axis): Pointing the front of the instrument down adds reverb; pointing it up
adds very short delays. Maintaining an extreme upward position can generate a feedback effect.
The vertical axis also influences the pitch transposition process. When the instrument is
level, the maximum range of the transposition is one half tone up (allowing the instrument to
play chromatic passages). Pointing the front of the instrument down causes the transposition to go
down. The further down the instrument is pointed, the larger the downward range of the
transposition will be. Thus, the lowest pitch can be obtained when the instrument is pointing
down and strong pressure is applied to the right pressure sensor. Conversely, the highest notes on
the instrument can be obtained by pointing it up and strongly pressing the right sensor.
Upside down: When the instrument is upside down, the loop being played (if there is one) is
altered in speed and volume. The speed becomes erratic and the volume slightly decreases. After
45 seconds the loop returns to normal speed and fixed volume. If the instrument remains upside
down, the loop starts to fade out after these 45 seconds.
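Part of this mapping can be sketched as a single function from sensor values to effect parameters (Python; the curves and ranges below are illustrative reconstructions, not the values of the actual patch):

```python
def map_sensors(pressure_l, tilt_lr, tilt_fb):
    """Pressures run 0-1000; tilts run -1 (left/down) to +1 (right/up)."""
    params = {}
    # Left pressure -> ring-modulator frequency: below ~16 Hz the effect
    # is heard as a tremolo; higher frequencies turn the timbre bell-like.
    params["ringmod_hz"] = (pressure_l / 1000) * 400
    # Left-right tilt pans the delayed signal to the same side.
    params["delay_pan"] = tilt_lr
    # Front-back rotation: down adds reverb, up adds very short delays.
    params["reverb_amount"] = max(0.0, -tilt_fb)
    params["short_delay_amount"] = max(0.0, tilt_fb)
    return params

print(map_sensors(pressure_l=300, tilt_lr=-0.2, tilt_fb=0.4))
```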
In addition to these sensors, two pedals were used in the performance. The information
coming from these pedals is shown on the right side of FIG. 18. Pedal 1 is an ON/OFF switch, and pedal 2 cycles through six statuses.
Pedal 1: When the pedal is pressed (ON), it freezes the value of all the sensors, so the
parameters for sound processing remain fixed until the pedal is pressed again (OFF position).
The only exception is the right pressure sensor, which is also fixed when the pedal is ON, but as
soon as it is pressed for more than 100 milliseconds, it starts to work again. Eventually, this pedal will be replaced by a button attached to the instrument.
Pedal 2: controls the recording and playback of the audio loops used in the performance.
When the pedal is pressed (status 1), the patch records the live sound until it is released (status
2). When the pedal is pressed (status 3) and then released (status 4) again, the loop is played in
reverse at a speed that makes the length of the loop the same as the time interval between status 3
and 4. Status 5 (pressing the pedal) starts a fade out, and status 6 (releasing) ends the loop. The
next time the pedal is pressed this cycle starts again (status 1).
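The six-status cycle of pedal 2 is, in effect, a small state machine, which can be sketched as follows (Python; the action strings stand in for the patch’s actual recording and playback operations):

```python
class LoopPedal:
    """Cycle through the six statuses described above; after status 6
    the next press starts the cycle again at status 1."""
    ACTIONS = {
        1: "start recording",
        2: "stop recording",
        3: "mark start of the playback interval",
        4: "play loop in reverse, length = interval between 3 and 4",
        5: "start fade out",
        6: "end loop",
    }

    def __init__(self):
        self.status = 0

    def press_or_release(self):
        self.status = self.status % 6 + 1
        return f"status {self.status}: {self.ACTIONS[self.status]}"

pedal = LoopPedal()
for _ in range(7):                 # the seventh event restarts the cycle
    print(pedal.press_or_release())
```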
5.4.3.
Performance issues
The sensors and the patch provide the kalimba with new performance gestures, e.g., right-left
tilting and front-back rotation, as well as new sound possibilities, such as pitch bend, tremolo,
extended range, control of reverberation, delay, etc.
In the performance of an improvisation with the hyper-kalimba, it is possible to highlight
traditional characteristics of the kalimba and its repertoire: its melodic aspect and the use of
ostinatos, here transformed by electronic treatments. All the sounds produced are the result of
manipulation in real time of the kalimba sound. This helps to preserve the melodic characteristic
of the instrument. In addition, the sensors added to the instrument do not interfere with the
traditional technique. Rather, they create new gestures that complement the technique of the
instrument. These new performance gestures must be mastered by the performer. Thus, although
in one sense the hyper-kalimba is a new musical instrument, it also allows performers to take
advantage of their existing skills. While exploring this new vocabulary of gestures, the performer
is also able to explore the new sound possibilities of the instrument.
One of the main issues of performing DMIs, and in particular the hyper-kalimba, is ‘latency’.
Latency is the delay between an action by the performer and its sonic result. Latency occurs
because of the time required to process the input data and generate the output. Modern computers
can process data very quickly, sometimes reducing the latency to imperceptible levels. A small,
but still noticeable, latency is present in the sound production of the hyper-kalimba. This latency
creates some issues for the performer, especially related to playing precise rhythms and
synchronizing them with other phrases. In this improvisation, coordination must be achieved
between the hyper-kalimba part and the recorded loops. The performer must thus get used to
playing with a constant slight delay. Because the acoustic sound of the instrument is very soft – it is covered by the amplified and processed sound – the latency can be acoustically imperceptible
to the audience, although it may be visually noticeable (latency between the audience’s
perception of the gesture of the performer and the sound produced).
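A rough sense of the orders of magnitude involved: each audio buffer adds buffer-length divided by sample-rate seconds of delay, once at input and once at output. The sketch below is a simplified model (real latency also depends on drivers, the sensor interface, and the processing itself):

```python
def io_latency_ms(buffer_frames, sample_rate, stages=2):
    """Rough audio latency: each buffered stage adds
    buffer_frames / sample_rate seconds; input + output = 2 stages."""
    return 1000 * stages * buffer_frames / sample_rate

# With 512-frame buffers at 44.1 kHz, buffering alone adds ~23 ms,
# already enough to be felt when playing precise rhythms against loops.
print(round(io_latency_ms(512, 44100), 1))   # 23.2
```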
During the development of the hyper-kalimba I was involved with technological,
compositional, and performance aspects. As a result, I was able to observe how decisions related
to building the patch, to performing the instrument, and to creating a musical structure directly
influenced one another. As I developed the instrument I was able to explore new gestures that
could potentially be mapped. These mapping choices then facilitated certain sounds or effects,
and this had consequences in the musical material. Conversely, when I sought specific sounds in
the improvisation I had to find effective ways to create them that would not interfere with normal
performance gestures.
An example of these interrelations is the control of the transposition effect. In the first
mapping of the instrument, which was used for the work A la luna, the transposition was
controlled by the right pressure sensor. The transposition was always upward, and its value was
determined by the amount of pressure applied. Strong pressure could create up to a four octave
transposition, producing a very different timbre: one which was explored in A la luna. However,
small transpositions, like a half tone, were difficult to achieve accurately, as the performer had to
press the sensor very carefully (pressure close to 200, on the scale from 0 to 1000, for a half tone
transposition). In order to make the performance of chromatic passages easier, without excluding
the possibilities of larger transpositions, the position of the instrument (vertical axis) was mapped
to control the range of the transposition. In this new mapping, when the instrument is pointing up
the control of the transposition is the same as it was in the previous mapping. However, when the
instrument is level, the pressure-to-transposition relationship changes so that it is considerably
easier to play half tone transpositions; any pressure larger than 600 produces a half tone
transposition (the maximum transposition at this position). Moreover, mapping the vertical axis
to the range of the transposition also allows for the possibility of a downward transposition
(when the front of the instrument is pointed down).
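The revised pressure-to-transposition relationship can be sketched as follows (Python; the exact curves are illustrative, interpolated from the values mentioned above):

```python
def transpose_semitones(pressure, tilt_fb):
    """pressure: 0-1000; tilt_fb: -1 = down, 0 = level, +1 = up."""
    if tilt_fb < 0:
        # Pointing down: the transposition goes down, and the further
        # down the instrument points, the larger the range (assumed).
        return (pressure / 1000) * tilt_fb * 24
    if tilt_fb == 0:
        # Level: the range tops out at a half tone, reached by any
        # pressure above ~600, making chromatic passages easy to play.
        return min(pressure / 600, 1.0) * 1.0
    # Pointing up: the wide range of the first mapping returns,
    # up to roughly four octaves at full pressure (assumed curve).
    return (pressure / 1000) * (1 + tilt_fb * 47)

print(transpose_semitones(700, 0.0))    # 1.0 -> a half tone up
print(transpose_semitones(1000, 1.0))   # 48.0 -> four octaves up
```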
This feature is used in the beginning of Improvisation for Hyper-kalimba; the first phrase is
played at a lower transposition (one octave down), and recorded. This recording is later looped
as the performer plays variations of the same phrase, untransposed (i.e., one octave higher).
Working on the development and performance of the hyper-kalimba thus clearly illustrated for
me how a close understanding of the acoustic and technological aspects of electronic music can
provide more intimacy with electronic sounds and more subtle control over them.
6. Conclusion
6.1.
Possible responses to the initial questions
In the introduction to this paper I posed a number of questions relating to performance issues
in live electronics. In the light of the preceding discussion, I would now like to propose some
possible responses to these questions.
(1) Which aspects of music technology (if any) should the performer be familiar with when
performing this repertoire? How can this knowledge produce a better performance?
Although the performance of computer-based mixed works does not require the performer to
learn Max/MSP programming (which can be a very complex task), some skills related to music
technology are still useful. In order to facilitate the rehearsal process, for example, the performer
should learn how to set up MIDI and audio interfaces. Moreover, the performer must learn how
to operate the Max patch written for the piece. If the patch needs to be debugged or updated, the
performer will normally need to rely on an expert; but once the patch has been created, the
performer must be familiar with its operation in order to use it properly during rehearsals and
sometimes even concerts. The performer is aided in this task when detailed instructions on how
to operate the patch are included in the score. Being able to practice with the electronics on one’s
own certainly helps the performer to develop a more subtle control over the piece.
(2) What is the interaction between performer and electronics in a mixed work? (a) How can
synchronization be achieved?
Mixed works can present different levels of interaction and different ways of implementing
this interaction. In works with fixed media, performers usually have to adjust their timing to the
computer. In some cases, this can be done just by listening to the electronic part, but in more
complex cases other cues are necessary. A click track is one option, as is a timer or other types of
visual cues. Finally, the composer can have the electronic part follow the performer, using pedals
(or pads) to trigger the audio files. An additional person can also follow the score and trigger the
audio files.
There are other more complex ways of making the computer follow the performer. These are
often used in pieces with live electronics. Score following is one technique based on the
recognition of one or more aspects of the performance (pitch, timing, dynamics).
(2) (b) How are electronic and acoustic sounds combined and diffused?
Electronic and acoustic sounds can be conceived of as a single sound source produced by a
virtual augmented instrument, or they can be created as two distinct musical elements, like a duo
between the instrument and electronics. Amplification and diffusion have important roles to play
in blending the acoustic and electronic sounds. Wooden Stars, Anamorfoses and Traces IV all use
different approaches to combining and projecting acoustic and electronic sounds. As the sound is
projected via loudspeakers, composers can also manipulate its placement in a virtual 3D
environment. This spatialization effect can have an important impact on the perception of the
work by the audience.
(3) How are performance decisions affected by an electronic part in a piece?
New gestures often need to be learned. In the case of an extended instrument, the performer
has to master the control of the ‘new instrument’. Furthermore, in an interactive system, the
performer should be aware of which properties of the acoustic sounds must be recognized and
followed in real time by the software being used. These may include parameters such as pitch,
timbre, duration, attack, accents, and dynamics. Performance decisions related to these properties
can be affected by this interaction, as is the case in Anamorfoses.
(4) Are there differences between performing/practicing an acoustic work and
performing/practicing a mixed work?
There are certainly differences, such as the need for extra equipment (computer and sound
system) for rehearsals and performance, and the fact that the sound is projected into the space by
speakers. For this reason, it is usually difficult for the performer to hear the piece in rehearsal in
exactly the same way that it will sound in concert. This can add a great deal of uncertainty to the
performer’s experience, which is often the reason that performers are afraid of working with
electronics. Ideally, the performer should find a way to have rehearsals with all the equipment
that will be used in the performance before the actual day of the concert.
More important than the differences between performing/practicing an acoustic work and
performing/practicing a mixed work, however, are the similarities: in both cases, the performer
interprets a musical structure, adjusting his or her playing according to the style of the music, to the
composer’s intention, and also to the characteristics and limitations of the instrumentation,
equipment, and hall. Adjustment is part of the performance of any instrument, in any style. For
example, string players use different amounts of vibrato when they play baroque music, romantic
music, or contemporary music. The adjustments that performers make in order to play with
electronics can be thought of in the same way as these ‘acoustic’ adjustments.
6.2. Final thoughts
A new repertoire requires new skills for the performer. In performing with electronics, these
new skills include:
- New instrumental gestures (e.g., pressing a pedal, as in Wooden Stars and Anamorfoses; or any gesture captured by a sensor and used as a parameter to control a sound process, such as tilting the instrument in the performance of the hyper-kalimba; see the sketch after this list);
- A new relationship to the equipment (no longer only to his or her instrument, but also to electronic devices, such as MIDI and audio interfaces); performers may need to learn some basic concepts of music technology;
- Understanding the result of the sound processing in the piece. Performance decisions
can be affected by the intended result of the sound process. For example, performance
decisions about dynamics in Anamorfoses and Traces IV were greatly influenced by the
electronic part.
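As a minimal illustration of the sensor-based gestures mentioned in the first point, the sketch below maps a tilt reading to a sound-processing parameter. The sensor range, the chosen parameter (a delay time), and its range are assumptions made for the example, not the actual mapping used in the hyper-kalimba.

    # Mapping a tilt sensor to a processing parameter (illustrative values).
    def map_range(value, in_lo, in_hi, out_lo, out_hi):
        """Linearly rescale a sensor reading to a parameter range."""
        value = max(in_lo, min(in_hi, value))     # clip to the sensor range
        norm = (value - in_lo) / (in_hi - in_lo)  # normalize to 0..1
        return out_lo + norm * (out_hi - out_lo)

    # e.g., an accelerometer axis running from -1.0 (tilted left) to
    # 1.0 (tilted right) controlling a delay time of 50 to 500 ms:
    for tilt in (-1.0, 0.0, 0.5, 1.0):
        ms = map_range(tilt, -1.0, 1.0, 50.0, 500.0)
        print("tilt=%+.1f -> delay=%.0f ms" % (tilt, ms))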
There are many different ways in which performers can interact with electronics. Each one of
these presents advantages and drawbacks. In the compositional process, one initial decision is
whether to use fixed media or live electronics. Performing with live electronics usually allows the performer more freedom than performing with fixed media, especially concerning tempo. This can be considered an advantage of more interactive systems. However, processing sound in real time is much more complex than playing a fixed audio file. It usually requires more equipment and software, as well as more time to set up and adjust the sound system. In addition, the results
are more open to uncertainties. A tape part is, in that sense, more reliable, although not as ‘fresh’
as live electronics. Choosing between these two forms often involves some kind of compromise.
If the emphasis is on obtaining a precise and complex array of sounds, for example, fixed media
may be preferred. On the other hand, if the emphasis is on an intimate temporal relationship
between the electronic and instrumental parts, or on extending the sound possibilities of the
instrument, more interactive systems may be a better option. Very often, mixed works combine
both approaches: the blending of audio files and real-time sound processing, as in Traces IV, or
some kind of real-time control of audio files, as in Wooden Stars.
In works with live electronics, the choice of human inputs to be used within the interactive
system should also be carefully considered. An extra person can be in charge of controlling the
electronics; but the performer can also interact with the electronic part by using MIDI devices,
sensors, or the sound of the acoustic instrument itself. As stated before, no approach is
necessarily better or worse than another; the choice always depends on the musical context.
Collaboration between composers and performers is always a good way to find solutions that
work well for both the composition’s structure and the performance. For performers,
collaboration is important not only with composers, but also with technical assistants, who often
have the role of adjusting how the sounds will be heard by the audience (amplification,
diffusion).
Finally, performers should be aware of which of their gestures and/or sounds are being captured by the interactive system, and of how these shape the real-time processes, since this awareness will inform performance decisions. Performers should understand that adjusting their playing to make the
electronics work can be perfectly normal. In fact, they should assume a more active role in this
repertoire, since the electronic part can be directly influenced by the way they play. Making the
electronics work properly is not only a problem for the composer or a music technician to solve,
but it is also part of the performer’s role in this repertoire. Controlling an electronic part can be
very stimulating and interesting to performers, since it can open up more sound possibilities and performance capabilities. Learning basic skills related to interactive music systems can be a challenge, but it is very helpful, and sometimes even necessary, if performers are to reach a more intimate relationship with the electronics, one that gives them more control, confidence, and pleasure when playing with computer-based live electronics.
7. References
1. Alvarez, Javier. “Rhythm as motion discovered.” Contemporary Music Review 3: 203-231,
1989.
2. Appleton, Jon and Ronald C. Perera (eds.). The Development and Practice of Electronic
Music. New Jersey: Prentice-Hall, 1975.
3. Auzet, Roland. “Gesture-following Devices for Percussionists.” Trends in Gestural Control
of Music. CD-ROM. Eds. M. Wanderley and M. Battier. Paris: Ircam - Centre Pompidou,
2000.
4. Belet, Brian. “Live performance interaction for humans and machines in the early twenty-first century: One composer's aesthetics for composition and performance practice.”
Organised Sound 8 (3): 305-312, 2003.
5. Benadon, Fernando. “An interview with Martin Matalon.” Computer Music Journal 29 (2):
13-22, 2005.
6. Bongers, Bert. “Physical Interfaces in the Electronic Arts: Interaction Theory and
Interfacing Techniques for Real-time Performance.” Trends in Gestural Control of Music.
CD-ROM. Eds. M. Wanderley and M. Battier. Paris: Ircam - Centre Pompidou, 2000.
7. Cage, John. Silence. Hanover: Wesleyan University Press, 1961.
8. Chadabe, Joel. “The History of Electronic Music as a Reflection of Structural Paradigms.”
Leonardo Music Journal 6: 41-44, 1996.
9. ______. "The performer is us." Contemporary Music Review 18(3): 25-30, 1999.
10. ______. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice Hall, 1997.
11. _______ and Jan Williams. “Collaboration II: A Conversation between Joel Chadabe and
Jan Williams.” Percussive Notes Research Edition 25 (3): 99-103, 1987.
12. Emmerson, Simon. "Acoustic/Electroacoustic: The Relationship with Instruments."
Journal of New Music Research 27 (1-2): 146-164, 1998.
13. _______. “‘Losing touch?’ the human performer and electronics.” Music, Electronic Media
and Culture. Ed. S. Emmerson. Aldershot; Burlington, USA: Ashgate, 2000.
14. Esler, Robert. “Digital Autonomy In Electroacoustic Music Performance: Re-Forging
Stockhausen.” Proceedings of the International Computer Music Conference. New
Orleans, 2006.
15. Freire, Sérgio. "Implementação da síntese FM em uma linha de atraso variável e suas
possíveis aplicações no processamento digital de sons.” Anais do X Simpósio Brasileiro de
Computação e Música. Belo Horizonte: 219-225, 2005.
16. _______. “patches seção 1 e 3 - texto sobre a peça.” E-mail to Fernando Rocha, January
18th, 2008.
17. Garnett, Guy. E. “The Aesthetics of Interactive Computer Music.” Computer Music
Journal 25 (1): 21-33, 2001.
18. Griffiths, Paul. A Guide to Electronic Music. London: Thames and Hudson, 1979.
19. Hunt, Andy and Marcelo Wanderley. “Mapping performer parameters to synthesis
engines.” Organised Sound 7(2): 97–108, 2002.
20. Jot, Jean-Marc. “Spatialisateur: The Spat~ Processor and its Library of Max Objects.” Spat~ Reference Manual. Ed. Marc Battier. Paris: IRCAM, 2006.
21. Kimura, Mari. “Performance practice in computer music.” Computer Music Journal 19 (1):
64-75, 1995.
22. _______. “Creative process and performance practice of interactive computer music: A
performer's tale.” Organised Sound 8 (3): 289-296, 2003.
23. Machover, Tod. “Instruments, Interactivity, and Inevitability.” Proceedings of the 2002
International Conference on New Interfaces for Musical Expression (NIME02). Dublin:
NIME, 2002.
24. Makan, Keeril. “An interview with Edmund Campion.” Computer Music Journal 28(4):
16-24, 2004.
25. Manning, Peter. Electronic and Computer Music. New York: Oxford University Press,
2004.
26. McNutt, Elizabeth. “Performing Electroacoustic Music: A Wider View of Interactivity.”
Organised Sound 8 (3): 297-304, 2003.
27. Matalon, Martin. “Re: Traces IV _ performance in Montreal.” E-mail to Fernando Rocha,
January 14th, 2008.
28. Menezes, Flo. “For a morphology of interaction.” Organised Sound 7 (3): 305-311, 2002.
29. Miranda, Eduardo and Marcelo Wanderley. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Middleton, Wisconsin: A-R Editions, 2006.
30. Mulder, Axel. "Virtual Musical Instruments: Accessing the Sound Synthesis Universe as a
Performer." Proceedings of the First Brazilian Symposium on Computer Music. Belo
Horizonte, Brazil: Universidade Federal de Minas Gerais. 243-250, 1994.
31. Neidhöfer, Christoph. “Bruno Maderna’s Serial Arrays.” Music Theory Online 7 (1), 2007.
32. Nicholls, David (ed.). The Cambridge Companion to John Cage. Cambridge, U.K.; New
York: Cambridge University Press, 2002.
33. Nyman, Michael. Experimental Music: Cage and beyond. New York: Schirmer, 1974.
34. Orio, Nicola; Serge Lemouton; and Diemo Schwarz. “Score Following: State of the Art and
New Developments.” Proceedings of the 2003 Conference on New Interfaces for Musical
Expression (NIME-03), Montreal, Canada: 36-41, 2003.
35. Pinch, Trevor, and Karin Bijsterveld. “Should one applaud? Breaches and Boundaries in
the Reception of New Technology in Music.” Technology and Culture 44: 536-559, 2003.
36. Pritchett, James. The music of John Cage. Cambridge [England]; New York: Cambridge
University Press, 1993.
37. Puckette, Miller and Cort Lippe. “Score Following in Practice.” Proceedings of the ICMC
1992: 182–185, 1992.
38. Rowe, Robert. “The Aesthetics of Interactive Music Systems.” Contemporary Music
Review 18(3): 83-87, 1999.
39. Russolo, Luigi. “The Art of Noises.” 1913. Audio culture: readings in modern music. Ed.
Christoph Cox and Daniel Warner. New York: Continuum. 10-14, 2004.
40. Saltz, David Z. “The Art of Interaction: Interactivity, Performativity, and Computers.” The
Journal of Aesthetics and Art Criticism 55 (2): 117-127, 1997.
41. Séjourné, Emmanuel. Withered Leaves - New Birth. CD liner notes. Christal SCACD
54221, [s.d.].
42. Schick, Steven. The Percussionist’s Art: same bed, different dreams. Rochester, NY:
University of Rochester Press, 2006.
43. Schloss, W. Andrew. “Using contemporary technology in live performance: The dilemma
of the performer.” Journal of New Music Research 32 (3): 239-242, 2003.
44. Stockhausen, Karlheinz; Jerome Kohl. “Electroacoustic Performance Practice.”
Perspectives of New Music 34 (1): 74-105, 1996.
45. Stroppa, Marco. “Live electronics or...live music? Towards a critique of interaction.”
Contemporary Music Review 18 (3): 41-77, 1999.
46. Varèse, Edgar. “The liberation of sound.” 1936-1962. Audio culture: readings in modern
music. Ed. Christoph Cox and Daniel Warner. New York: Continuum. 17-21, 2004.
47. Wanderley, Marcelo and Philippe Depalle. “Gestural control of sound synthesis.”
Proceedings of the IEEE 92 (4): 632 – 644, 2004.
48. Wanderley, Marcelo and Marc Battier (eds.). Trends in Gestural Control of Music. CD-ROM. Paris: IRCAM - Centre Georges Pompidou, 2000.
49. Wetzel, David Brooke. “A model for the conservation of interactive electroacoustic
repertoire: analysis, reconstruction, and performance in the face of technological
obsolescence.” Organised Sound 11(3): 273-284, 2006.
50. Winkler, Todd. Composing Interactive Music: Techniques and Ideas Using Max. Cambridge, Massachusetts; London, England: MIT Press, 1998.
Websites:
51. Buchla and associates <www.buchla.com> April 6th, 2008
52. Cycling ’74 <www.cycling74.com> April 6th, 2008
53. Geof Holbrook <www.music.mcgill.ca/~holbrook> April 6th, 2008
54. Martin Matalon <http://martin.matalon.free.fr> April 6th, 2008
55. Obsolete <www.obsolete.com/120_years> April 6th, 2008
56. Pure Data <http://puredata.info> April 6th, 2008
57. Réseaux <www.reseauxconcerts.com> April 6th, 2008
58. SensorWiki <www.sensorwiki.org/index.php/Sensors> April 6th, 2008
59. Sérgio Freire <www.musica.ufmg.br/~sfreire> April 6th, 2008
60. SuperCollider <http://supercollider.sourceforge.net> April 6th, 2008
Scores:
61. Almada, Daniel. Linde. 1994. Strasbourg: François Dhalmann, 2002.
62. Campion, Edmund. Losing Touch. 1993. Paris: Billaudot, 2003.
63. Freire, Sérgio. Anamorfoses, 2007.
64. Holbrook, Geof. Wooden Stars, 2006 (rev. 2008).
65. Matalon, Martin. Traces IV. 2006. Paris: Billaudot, 2007.
66. Saariaho, Kaija. Six Japanese Gardens. 1993. London: Chester Music, 1996.
Appendix: list of equipment
This appendix lists the equipment required for the performance of each of the four pieces discussed in Chapter 5.
Wooden Stars
• 1 Computer (minimum 1.2 GHz / 512 MB RAM) running OSX 10.4 and Max/MSP 4.6;
• MIDI Interface;
• 1 MIDI foot pedal;
• Audio Interface;
• Mixing board with at least 8 channels;
• Reverb unit;
• 6 Microphones;
• 2 speakers;
• 1 monitor for performer.
Anamorfoses
• 1 Computer (minimum 1.5 GHz / 1 GB RAM) running OSX 10.4 and Max/MSP 4.6;
• MIDI Interface;
• 1 or 2 MIDI foot pedals;
• Audio Interface;
• Mixing board with at least 5 inputs and 4 outputs;
• 2 or 3 Microphones;
• 2 speakers placed close to the vibraphone.
Traces IV
• 1 Macintosh Computer (Intel Core 2 Duo, 2.33 GHz / 1.5 GB RAM) running OSX 10.4
and Max/MSP 4.6;
• MIDI Interface, including MIDI Mixer with 8 channels;
• 1 MIDI foot pedal;
• Audio Interface with at least 2 inputs and 8 outputs (preferably ADAT);
• Mixing Board with at least 10 inputs and 8 outputs;
• 2-3 Microphones;
• 6 speakers (placed around the audience);
• Headphones for the musician.
Improvisation for hyper-kalimba
• 1 Computer (1.5 GHz / 1.25 GB RAM) running OSX 10.4 and Max/MSP 4.6;
• 1 kalimba with built-in pickup (piezo microphone);
• 2 pressure sensors (force-sensitive resistors), 3 accelerometers;
• Arduino sensor interface (USB);
• 2 MIDI foot pedals;
• MIDI Interface;
• Audio Interface;
• Mixing board;
• 2 speakers;
• 1 monitor for performer.
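For readers curious how data from the Arduino sensor interface reaches the computer, the sketch below (in Python, assuming the third-party pyserial library, a hypothetical port name, and an invented comma-separated message format) shows one way to read and parse incoming sensor values; in the actual performance, this data is routed into Max/MSP.

    # Reading Arduino sensor data over USB serial (illustrative only):
    # assumes messages such as "p1,p2,ax,ay,az" terminated by newlines.
    import serial  # third-party 'pyserial' library

    port = serial.Serial("/dev/tty.usbserial", 9600)  # hypothetical port name
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        try:
            values = [int(v) for v in line.split(",")]
        except ValueError:
            continue  # skip malformed or partial lines
        print("sensor values:", values)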