Sound and Video Anthology: Program Notes
Media Compositions and
Performances: Doug Van Nort,
Curator
Curator’s Note
It is with great pleasure that I have
curated Computer Music Journal’s
2014 Sound and Video Anthology. I
decided upon a theme of distributed
agency in digitally mediated perfor-
mances. In particular, my interest here
is to showcase a multiplicity of ways
in which shared agency manifests
between human performers, as well
as between human and machine per-
formers. The collection begins with
“Part A: Distributed Composition”;
this section presents audio/video
documents that highlight five unique
approaches to distributing and sharing
expressive voices between composer-
performers. In these works, the
resulting compositional voice does
not reside in one central location,
but rather is a product of collective
co-creation, at varying levels of spa-
tial and temporal remove. This set
includes a work by Chris Chafe and
colleagues, wherein large-scale com-
positional qualities are influenced by
global sea levels as well as by a live
audience, resulting in a piece that
is not only artful but consciousness-
raising at the same time. In contrast
to this “outsourcing” of the details of
compositional form, the works by Pe-
dro Rebelo and The Hub both present
two very different takes on “net-
work music”: Rebelo’s work defines
a global feedback network whose
sonic character and overall shape are
the product of a large-scale intercon-
nection of disparate acoustic spaces
and performers, whereas The Hub—
the fathers of “computer network
music”—present us with a canonical
example of their ever-groundbreaking
approach to composing for shared,
living network structures. The piece
by CLOrk (the Concordia Laptop
Orchestra) eschews the classically
calculated and precise world of the
laptop orchestra in favor of the messy
and risky world of interdisciplinary
improvisation. The result is a work
whose shared agency is a product
of listening for gestural engagement
across forms (kinetic, sonic). Finally,
Bill Hsu and Chris Burns present
a piece that intersects this world
of cross-media improvisation with
shared control at the level of their
interactive performance systems,
resulting in a document that demon-
strates the possible richness discov-
ered when sharing gestures across
media, between human performers,
and with the system itself.
This sharing of system-level ges-
tural and compositional forms is the
focus of “Part B: Musical Metacre-
ation.” This section highlights
cutting-edge machine improvisa-
tion systems in performance with
two top-level human improvisers:
Paul Hession on drums and Finn Pe-
ters on flute and saxophone. Hearing
these disparate systems at play with
the same performer begins to hint
at the stylistic differences of their
composer-designers, as well as the
virtuosic flexibility of the human
players. In order to bring focus to-
wards listening to these differences, 我
have decided that this section should
be audio-only. Each of these excerpts
comes from a single concert of the
same name that took place at Cafe
OTO in London in July 2014. The
curation of this concert was the work
of Ollie Bown, and so the excellent
selection of the included systems is
purely to his credit. Aside from being
privileged to take part in the concert,
from a curatorial point of view I sim-
ply had the good sense to incorporate
these works into the in-progress cura-
tion of this collection, both because
they fit so nicely with my chosen
theme and because I could feel the
strong improvisational musicianship
on the evening of performance. I will
leave the description of each system
and piece for the program notes; taken
as a whole, I feel that these works
create an excellent counterpoint to
Part A by virtue of their cohesion
as well as a concentrated focus on
both stylistic engagement and sonic
gestural forms (as compared with
the expansive and organic crossing
of media and expressive types found
within the first set). As a collection,
I hope that you will find the diver-
sity and quality of these works as
compelling as I have, and that they
might provide for a moment to reflect
on the creative insights that may be
gained when one “loosens the reins”
on one’s own artistic control, instead
distributing it among a collective of
listening and expressing performers,
be they present or tele-present, musi-
cal beings or meta-musical machines.
Part A – Distributed Composition
1. Polartide—Chris Chafe
Polartide started as a project for
the 2013 Venice Biennale Mal-
dives Pavilion. A team of musicians
and artists banded together at UC
Berkeley’s Center for New Media
(bcnm.berkeley.edu) to create a sound
marker that tracks sea water levels
in coastal cities. A sound marker is
an alarm of sorts that sounds out to
all members of a community within
earshot of a bell tower. The first ver-
sion worked with simulated bells, and
this version, Spillover, works with a
live audience and a carillonneur.
The carillonneur plays a fixed score
that is a “musification” of global sea-
level data. The audience, using the
Spillover Web app, controls the speed
or tempo at which the carillonneur
plays the score. The audience controls
how fast the music plays from note
to note, and metaphorically explores
how our actions affect the rise of
global sea water.
The Polartide team includes:
Chris Chafe, Composer, Stanford Uni-
versity Center for Computer Research
in Music and Acoustics (CCRMA)
Rama Gottfried, Musician, Berkeley
Center for New Media
Perrin Meyer, Sound Designer, Meyer
Sound
Tiffany Ng, Musician, Berkeley Cen-
ter for New Media
Greg Niemeyer, Artist, Berkeley Cen-
ter for New Media
The Polartide team would like to
thank the following people for their
support: Monica Lam, June Holtz,
Sharon Eberhart, and The Open
Source Community.
Chris Chafe is a composer, improvi-
sor, and cellist, developing much of
his music alongside computer-based
research. He is Director of Stanford
University’s Center for Computer
Research in Music and Acoustics
(CCRMA). At IRCAM (Paris) and
The Banff Centre (Alberta), he pur-
sued methods for digital synthesis,
music performance, and real-time
Internet collaboration. CCRMA’s
SoundWIRE project involves live con-
certizing with musicians the world
over. Online collaboration software
including JackTrip and research into
latency factors continue to evolve.
An active performer both on the net
and physically present, his music
reaches audiences in dozens of coun-
tries and sometimes at novel venues.
A simultaneous five-country concert
was hosted at the United Nations
in 2009. Chafe’s works are available
from Centaur Records and various
online media. Gallery and museum
music installations are into their
second decade with “musifications”
resulting from collaborations with
artists, scientists, and MDs. Recent
work includes the Brain Stethoscope
Project, Polartide for the 2013 Venice
Biennale, Tomato Quintet for the
transLife:media Festival at the Na-
tional Art Museum of China, and
Sun Shot played by the horns of large
ships in the port of St. John’s,
Newfoundland.
2. Netrooms: The Long
Feedback—Pedro Rebelo
Netrooms: The Long Feedback is a
participative network piece which
invites the public to contribute to an
extended feedback loop and delay line
across the Internet. The work explores
the juxtaposition of multiple spaces
as the acoustic, social, and personal
environment becomes permanently
networked. The performance consists
of live manipulation of multiple
real-time streams from different
locations that receive a common
sound source. Netrooms celebrates
the private acoustic environment as
defined by the space between one
audio input (microphone) and output
(loudspeaker). The performance of
the piece consists of live-mixing a
feedback loop with the signals from
each stream.
Visuals by Rob King.
Pedro Rebelo is a composer, sound
artist, and performer working primar-
ily in chamber music, improvisation,
and sound installation. In 2002, he
was awarded a PhD by the University
of Edinburgh, where he conducted re-
search in both music and architecture.
His music has been presented in
venues such as the Melbourne Recital
Hall, National Concert Hall Dublin,
Queen Elizabeth Hall, Ars Electron-
ica, and Casa da Música, and at events
such as Weimarer Frühjahrstage für
zeitgenössische Musik, Wien Modern
Festival, Cynetart, and Música Viva.
His work as a pianist and improvisor
has been released by Creative Source
Recordings, and he has collaborated
with musicians such as Chris Brown,
Mark Applebaum, Carlos Zingaro,
Evan Parker, and Pauline Oliveros.
Pedro has recently led participatory
projects involving communities in
Belfast and favelas in Maré, Rio de
Janeiro. This work has resulted in
sound art exhibitions at venues such
as the Metropolitan Arts Centre in
Belfast, Espaço Ecco in Brasília, and
Parque Lage and Museu da Maré in
Rio de Janeiro.
His writings reflect his approach to
design and creative practice in a wider
understanding of contemporary cul-
ture and emerging technologies. Pedro
has been Visiting Professor at Stanford
University (2007) and senior visiting
professor at Universidade Federal do
Rio de Janeiro, Brazil (2014). He has
been Music Chair for international
conferences such as ICMC 2008, SMC
2009, and ISMIR 2012. At Queen’s
University Belfast, he has held posts
as Director of Education and Acting
Head of School in the School of Mu-
sic and Sonic Arts and is currently
Director of Research for the School
of Creative Arts, including the Sonic
Arts Research Centre. In 2012 he
was appointed Professor at Queen’s
and awarded the Northern Bank’s
“Building Tomorrow’s Belfast” prize.
3. Multiple Issues—The Hub
Multiple Issues is a composite video
constructed of legacy video shots
from video footage that Hub mem-
ber Scot Gresham-Lancaster made
onstage during various American
and European performances over the
last 25 years. The soundtrack—made
from Hub pieces such as “WaxLips”,
“Stuck Note”, etc.—drives the jump-
cut editing, decided algorithmi-
cally with the MoviePy Python
program, set to trigger at various
auditory thresholds. This editing
technique reflects the egalitarian
and cooperative nature of all Hub
collaborations.
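For readers curious about the mechanics, the following is a minimal sketch of threshold-triggered jump-cut editing in the spirit of the approach described above, using the MoviePy library. The file names, analysis frame size, and threshold value are illustrative assumptions rather than details of the actual Multiple Issues edit.

```python
# Hedged sketch: audio-threshold-triggered jump-cut editing (illustrative only).
import numpy as np
from moviepy.editor import VideoFileClip, AudioFileClip, concatenate_videoclips

soundtrack = AudioFileClip("hub_soundtrack.wav")   # assumed input files
footage = VideoFileClip("legacy_footage.mov")

# Short-term RMS envelope of the soundtrack (100-ms frames).
sr = 44100
arr = soundtrack.to_soundarray(fps=sr)
samples = arr.mean(axis=1) if arr.ndim == 2 else arr
hop = sr // 10
rms = np.array([np.sqrt(np.mean(samples[i:i + hop] ** 2))
                for i in range(0, len(samples) - hop, hop)])

# Each upward crossing of the (assumed) loudness threshold marks a cut point.
threshold = 0.2
crossings = np.flatnonzero((rms[1:] >= threshold) & (rms[:-1] < threshold))
cut_times = (crossings + 1) * hop / sr

# At every cut, jump to a pseudo-random point in the legacy footage and hold
# it until the next cut, then lay the soundtrack back over the edit.
rng = np.random.default_rng(0)
boundaries = np.concatenate(([0.0], cut_times, [soundtrack.duration]))
segments = []
for start, end in zip(boundaries[:-1], boundaries[1:]):
    length = end - start
    src = rng.uniform(0, max(footage.duration - length, 0.01))
    segments.append(footage.subclip(src, min(src + length, footage.duration)))

edit = concatenate_videoclips(segments).set_audio(soundtrack)
edit.write_videofile("multiple_issues_sketch.mp4")
```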
The Hub, an American “computer
network music” ensemble formed in
1986, consists of John Bischoff, Tim
Perkis, Chris Brown, Scot Gresham-
Lancaster, Mark Trayle, and Phil
Stone. The Hub was the first live
computer music band whose mem-
bers are all designers and builders
of their own hardware and software
instruments.
The Hub grew from the League of
Automatic Music Composers: John
Bischoff, Tim Perkis, Jim Horton,
and Rich Gold. Perkis and Bischoff
modified their equipment for a
performance at The Network Muse
Festival in 1986 at The Lab in San
Francisco. Instead of creating an ad
hoc wired connection of computer
interactions, they decided to use a
hub—a general-purpose connection
for network data. This was less
failure-prone and enabled greater
collaborations. The Hub was the first
band to do a telematic performance,
which took place in 1987 between
the Clocktower and Experimental
Intermedia venues in New York.
Because this work represents some
of the earliest work in the context
of the new live music practice of
networked music performance, they
have been cited as the archetypal net-
work ensemble in computer music.
The Hub’s best-known piece, “Stuck
Note” by Scot Gresham-Lancaster,
has been covered by a number of
network music bands, including the
Milwaukee Laptop Orchestra (MiLO)
and the Birmingham Laptop Ensem-
ble (BiLE). They have collaborated
with the Rova Saxophone Quartet,
Nic Collins, Phil Niblock, and Alvin
Curran. They currently perform
around the world after a seven-year
hiatus that ended in 2004.
4. Dancing with
Laptops—CLOrk
Dancing with Laptops is an im-
provisatory collaboration between
Concordia Laptop Orchestra (CLOrk)
and the dance group Le Collab’Art de
Steph B. Twenty laptopists and three
dancers improvised freely without
prescribed compositional or techno-
logical restrictions and without an
assigned leader. This performance
was a first in a series of interdisci-
plinary, non-hierarchical improvised
performances designed to develop
listening, dialogical, and performa-
tive skills in collaborative settings,
which are typically democratic in
the synchronous (performances) and
the asynchronous (planning, realiz-
ing, researching) time frames. After
two Dancing with Laptops rehearsals,
CLOrk members decided to improvise
in response to (rather than leading)
the dancers. Though arguably a hier-
archical entrance strategy, it proved to
be effective in generating a conversa-
tional setting in which all participants
had opportunities to lead or respond to
others.
The Concordia Laptop Orchestra
(CLOrk) is an ensemble of 20–25
laptop performers that operates in
the framework of a university course
for electroacoustic music majors at
Concordia University in Montreal.
It was established by Eldad Tsabary
in 2011 with a curriculum built
around highly participatory planning,
production, and realization of inter-
disciplinary and networked laptop
orchestra performances, including
collaborations with a symphonic
orchestra, jazz and chamber ensem-
bles, other laptop orchestras, dancers,
VJs, actors, and various soloists.
CLOrk performances are typically
used as opportunities to investigate
and explore new aesthetic, perfor-
mative, conceptual, technical,
social, and educational possibili-
ties. Every performance serves as a
research-creation platform for ad-
vancing the practice of digital music
performance and our understanding
thereof.
5. Xenoglossia/Leishmania—
Bill Hsu (interactive
animation), Christopher
Burns (live electronics)
Xenoglossia/Leishmania is a struc-
tured audiovisual improvisation,
utilizing live electronics and inter-
active animations. Video is projected
on stage, above and behind the mu-
sicians. The musical and visual
performances are highly interdepen-
dent, guided together through the
actions of the performers, automated
real-time analysis of the audio, and
the exchange of networked messages
between the audio and animation
systems.
The Xenoglossia audio software fa-
cilitates high-level control of complex
polyphonic output. The performer
initiates multiple simultaneous gen-
erative processes, each with distinct
gestural and textural content, and then
controls their continuation and de-
velopment. The software provides
the ability to alter and reshape the
ongoing processes along dimensions
including pitch, rhythm, timbre, and
rate of evolution. The performer
can also clone and reproduce the
behavior of interesting sonorities and
textures, and shape the large-scale
form of the performance using tools
that generate contrast, variation, 和
synchronization between processes.
Leishmania is an interactive an-
imation environment that visually
resembles colonies of single-cell or-
ganisms in a fluid substrate. Each
cell-like component has hidden ini-
tial connections to and relationships
with other components in the envi-
ronment. The colonies evolve and
“swim” through the substrate, based
on a combination of colonial struc-
ture and inter-relationships and flows
in the fluid substrate that might be
initiated by gestural input. Protean,
organic-looking shapes emerge and
evolve in the system in a highly
unpredictable manner; the colonies
alternately congeal into relatively
well-defined forms, or disperse into
chaos. The system resembles an
abstract painting environment: a
gestural interface sets the fluid sub-
strate in motion and influences the
behavior of the colonies of cell-like
components.
These two systems communicate
with one another in a variety of
ways. The animation is influenced by
the real-time analysis of audio from
Xenoglossia. High-level tempo, spec-
tral, and other features are extracted
and sent via Open Sound Control to
the animation environment. Simple
and overly obvious mappings of sound
to visual parameters are avoided, but,
as can be observed in the video
clips provided later, the audio clearly
affects the overall coherence and
behavioral trends of the colonies.
The systems also exchange mes-
sages over a network interface.
Xenoglossia conveys information
about phrase-level timing and formal
evolution to the animation environ-
ment. In turn, Leishmania sends
visual descriptors regarding the den-
sity and position of cell clusters to
Xenoglossia, influencing the rhyth-
mic density, sonic character, and
coordination of audio layers. The
result is a closed loop of high-level
descriptive information between the
two systems. Thus, we are impro-
vising with our respective generative
systems; moreover, each system
monitors and is influenced by the
behavior of the other.
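As a concrete illustration of the audio-to-animation leg of this exchange, here is a minimal, hypothetical sketch using the python-osc library. The OSC addresses, port, and choice of descriptors are assumptions made for illustration and do not reflect the actual Xenoglossia/Leishmania message format.

```python
# Hedged sketch: sending high-level audio descriptors over Open Sound Control.
import time
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed host/port of the animation

def spectral_centroid(frame, sr=44100):
    """Rough spectral centroid of one windowed audio frame, in Hz."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float(np.sum(freqs * mags) / (np.sum(mags) + 1e-12))

def send_features(frame, sr=44100):
    """Extract a few descriptors from one audio frame and send them via OSC."""
    client.send_message("/audio/rms", float(np.sqrt(np.mean(frame ** 2))))
    client.send_message("/audio/centroid", spectral_centroid(frame, sr))

if __name__ == "__main__":
    # Toy driver: stream noise frames as a stand-in for the live audio input.
    sr, frame_len = 44100, 2048
    for _ in range(100):
        send_features(0.1 * np.random.randn(frame_len), sr)
        time.sleep(frame_len / sr)
```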
Bill Hsu works with electronics
and real-time animation systems.
He is interested in complex gener-
ative systems, inspired by natural
processes, that interact with live per-
formers. He has built systems, tools,
installations and compositions in
collaboration with Peter van Bergen,
Chris Burns, John Butcher, James Fei,
Matt Heckert, Lynn Hershman, Paula
Levine, Jeremy Mende, and Gino
Robair. He has recently performed
and presented work at the Blurred
Edges Festival 2014 (Hamburg), Zero
One Garage (San Jose), Yerba Buena
Center for the Arts (San Francisco),
San Francisco Electronic Music Fes-
tival 2013, ACM Creativity and
Cognition 2013 (Sydney), and NIME
2013 (Daejeon and Seoul). He teaches
and does research in the Department
of Computer Science at San Francisco
State University.
Christopher Burns is a composer and
improviser developing innovative
approaches to musical architecture.
His work emphasizes trajectory, lay-
ering and intercutting a variety of
audible processes to create intricate
forms. The experience of density is
also crucial to his music: His com-
positions, which often incorporate
materials that pass by too quickly to
be grasped in their entirety, exhibit
complex braids of simultaneous lines
and textures. Several recent projects
incorporate animation, choreography,
and motion capture, integrating per-
formance, sound, and visuals into a
unified experience.
Burns’ work as a music tech-
nology researcher shapes his work
in both instrumental chamber mu-
sic and electroacoustic sound. He
writes improvisation software incor-
porating a variety of unusual user
interfaces for musical performance
and exploring the application and
control of feedback for complex and
unpredictable sonic behavior. In
the instrumental domain, he uses
algorithmic procedures to create
distinctive pitch and rhythmic struc-
tures and elaborate them through
时间. Burns is also an avid archae-
ologist of electroacoustic music,
creating and performing new digital
realizations of music by Cage, Ligeti,
Lucier, Stockhausen, and others.
His recording of Luigi Nono’s La Lon-
tananza Nostalgica Utopica Futura
with violinist Miranda Cuckson was
named a “Best Classical Recording of
2012” by The New York Times.
A committed educator, Burns
teaches music composition and tech-
nology at the University of Wisconsin,
Milwaukee. Previously, he served as
the Technical Director of the Center
for Computer Research in Music
and Acoustics (CCRMA) at Stanford
University, after completing a doc-
torate in composition there in 2003.
He has studied composition with
Brian Ferneyhough, Jonathan Harvey,
Jonathan Berger, Michael Tenzer, and
Jan Radzynski.
Burns is also active as a con-
cert producer. He co-founded and
produced the Strictly Ballroom con-
temporary music series at Stanford
University from 2000 to 2004, and has
contributed to the sfSound ensemble
in the San Francisco Bay Area since
2003. Since 2006, he has served as the
artistic director of the Unruly Music
festival in Milwaukee.
Part B – Musical Metacreation
Curator’s Note: The “musical
metacreation” concert event was
recorded by Cafe OTO, and received
funding from the Design Lab at the
University of Sydney. It was further
supported by NIME 2014 (Gold-
smiths) as a satellite event, which fed
into a musical metacreation work-
shop presented at NIME by Brown,
Eigenfeldt, and Philippe Pasquier.
1. Paul Hession—drums,
Isambard
Khroustaliov—software
Being inside this cyclotron of
atomized information from my
own vantage point produces a pal-
pable sense of vertigo. A feeling
that it could be anything in any
order by anyone at any time for
any reason. Everything pointing
in all directions quaquaversally
but arriving at no destination.
And its effect is a cancellation of
affect. A feeling like Baudrillard’s
screen stage of blank fascination
has reached its terminal phase
and all previous depths are col-
lapsing into an endless vista of
dazzling surface play.
—Eric Lumbleau of Mutant Sounds, quoted online at www.theawl.com/2012/11/the-rise-and-fall-of-obscure-music-blogs-a-roundtable

The piece em-
ploys a computer model of a penguin,
some cellular automata, and analysis-
driven concatenative synthesis to
manifest and interrogate this mal
d’archive.
2. The Indifference Engine
versus Paul Hession (software
by Arne Eigenfeldt)
My software is often built around
the concept of negotiation, in which
virtual musical agents attempt to
come to some understanding in
terms of what they want to achieve
musically, and how they try to get
那里. This can be translated into
the notion of desires and intentions.
In this particular work, the virtual
agents have to deal with a Paul
Hession, who has his own desires
and intentions, unknown to them.
The agents must decide whether
to try to follow the live performer,
or continue with their own plans.
To make things more complicated,
each agent is given only a short
“view” of the outside world (a quarter
second, every two seconds) in order to
form their individual beliefs of what
the performer is doing. Since these
beliefs will often be contradictory, the
agents end up spending a lot of time
arguing, resulting in the occasional
indifference to the live performer.
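To make the mechanism more concrete, the following toy sketch illustrates agents that form beliefs from a quarter-second glimpse every two seconds and then choose between following the performer or keeping to their own plans. The class names, thresholds, onset-density belief, and per-agent viewing offset are hypothetical simplifications, not Eigenfeldt's actual agent architecture.

```python
# Toy sketch of belief-forming agents with a quarter-second view every two seconds.
import random

class Agent:
    def __init__(self, name, planned_density):
        self.name = name
        self.plan = planned_density             # onsets/sec the agent intends to play
        self.offset = random.uniform(0.0, 1.5)  # each agent peeks at a different moment
        self.belief = None

    def glimpse(self, onset_times, now, window=0.25):
        """Form a belief about the performer from a quarter-second view."""
        start = now - window - self.offset
        seen = [t for t in onset_times if start <= t <= start + window]
        self.belief = len(seen) / window

    def decide(self, group_beliefs):
        """Follow the performer only if the agents roughly agree about him."""
        if max(group_beliefs) - min(group_beliefs) > 4.0:
            return self.plan                    # contradictory beliefs: keep arguing
        return sum(group_beliefs) / len(group_beliefs)

if __name__ == "__main__":
    # A drummer playing roughly six onsets per second for 20 seconds.
    drummer = sorted(random.uniform(0, 20) for _ in range(120))
    agents = [Agent(f"agent{i}", random.uniform(1, 8)) for i in range(4)]
    for now in range(2, 21, 2):
        for a in agents:
            a.glimpse(drummer, float(now))
        beliefs = [a.belief for a in agents]
        print(now, [round(a.decide(beliefs), 1) for a in agents])
```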
3. Paul Hession—drums, Doug
Van Nort—FILTER system
This piece presents the Freely Im-
provising Learning and Transforming
Evolutionary Recombination (FIL-
TER) system, in an improvised duo
with percussionist Paul Hession.
The project explores themes such as
sonic gestural understanding, stylis-
tic tendencies, textural shifts and
transformations of the lived episodic
memory as it develops in the mo-
ment of performance. The work was
born from a desire to reflect upon,
and perhaps model, my own hu-
man performance practice with my
Granular-feedback Expanded Instru-
ment System (GREIS), wherein I often
capture and transform the musical
streams from other performers on the
fly.
4. Zamyatin (software by Oliver
Brown) with Finn Peters (sax)
Zamyatin is part of an ongoing study
into software systems that act in per-
formance contexts with autonomous
qualities. The system comprises an
audio analysis layer, an inner control
system exhibiting a form of com-
plex dynamical behavior, and a set
of “composed” output modules that
respond to the patterned output from
the dynamical system. The inner sys-
tem consists of a bespoke “Decision
Tree” that is built to feed back on
本身, maintaining both a responsive
behavior to the outside world and
a generative behavior, driven by its
own internal activity. The system
has been evolved using a database of
previous work by the performer, 到
find interesting degrees of interac-
tion between this responsivity and
internal generativity. Its output is
“sonified” through different output
modules, mini generative algorithms
composed by the author. Zamyatin’s
name derives from the Russian au-
thor whose dystopian vision included
machines for systematic composi-
tion that removed the savagery of
human performance from music.
Did he ever imagine the computer
music free-improv of the early 21st
世纪?
5. Finn Peters—sax, Nick
Collins—FinnSystem
This is the second outing for FinnSys-
TEM, a live musical agent originally
born on 14 April 2012. The agent was
educated on a corpus of Finn Peters’
sax and flute playing. While Finn will
have developed new techniques in
the intervening two years, 系统
remains frozen on an earlier version
of himself; so Finn will be encoun-
tering the agent at an interesting
remove via a previous iteration of
他自己.
6. Finn Peters—sax, Shlomo
Dubnov and Greg
Surges—software
This work explores a novel type of
interaction between a live musician
and a computer that was pre-trained
to improvise on a known, different
piece of music. While each partner
in the human-machine duo is free
to improvise on its own materials,
they both listen to each other, com-
ing in and out of sync and creating
a human-machine musical dialog
in a dynamic and often unexpected
mechanically driven plot. This piece
is the next step in the development
of the Audio Oracle method that
adds a “listening” component to the
improvisation process. The Audio
Oracle analyses repetitions in music
and uses them to create variations
in the same style. Moreover, during
the improvisation, the computer tries
to match its choice of improvisa-
tion materials to those of the live
musician. From time to time, the
computer also imitates the live mu-
sician by mirroring the ambiguity of
his or her style, thus alternating between
sections of contrasting dialog and a
machine-augmented “imitative” solo
performance.
[This work is based on research
on stylistic modeling carried out by
Gerard Assayag and Shlomo Dubnov
and on research on improvisation
with the computer by G. Assayag,
M. Chemillier, G. Bloch, and Arshia
Cont (aka the OMax Brothers) 在
the Music Representations Group at
l’Institut de Recherche et Coordina-
tion Acoustique/Musique (IRCAM).]
7. piano prosthesis—Michael
Young
This is one of a developing series
of duos for a human and a machine
performer. Both “musicians” adapt to
each other through mutual listening
(i.e., via audio only) and respond
as the performance develops. 这
human’s improvisation is encoded
by the computer through statistical
analysis of extracted features and by
cataloguing these in real time. Each
observation made by the computer is
assigned to a set of musical output
behaviors. Recurring features of the
player’s improvisation can then be
recognized by the computer. The
machine “expresses” this recognition
by developing, and modifying, its own
musical output, just as another player
might.
8. Finn Peters/Paul Hession/the
Matt Yee-King simulator
The Matthew Yee-King simulator
attempts to model and reproduce the
improvisational behavior of Matthew
Yee-King. The performance begins
with the real Matthew manipulating
two sampling machines and a set of
effects implemented in the Super-
Collider environment, controlled via
an Akai MPD24 MIDI controller. A
probabilistic model of the sequence
of control data he generates is built in
real time. When Matthew is satisfied
that he has demonstrated a range of
interesting and appropriate control
data patterns to the system, he flicks
the system to “generate” mode and
steps away. The model is then used
to autonomously control the sam-
plers and effects for the rest of the
performance.
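As an illustration of the kind of model described, though not the simulator's actual code, here is a minimal sketch of a first-order Markov chain over quantized controller events, trained during a demonstration phase and then sampled in "generate" mode. The event encoding and quantization step are assumptions for illustration.

```python
# Hedged sketch: first-order Markov model of (controller, value) events.
import random
from collections import defaultdict

def quantize(event, step=8):
    """Reduce a (controller, value) pair to a coarser state."""
    ctrl, val = event
    return (ctrl, val // step)

class ControlMarkov:
    def __init__(self):
        self.transitions = defaultdict(list)  # state -> list of observed next states
        self.prev = None

    def observe(self, event):
        """Called for each incoming control event during the demonstration phase."""
        state = quantize(event)
        if self.prev is not None:
            self.transitions[self.prev].append(state)
        self.prev = state

    def generate(self, start, length=32, step=8):
        """Sample a new control sequence once the performer steps away."""
        state, out = quantize(start), []
        for _ in range(length):
            choices = self.transitions.get(state)
            if not choices:                     # dead end: restart from any known state
                state = random.choice(list(self.transitions))
            else:
                state = random.choice(choices)
            ctrl, qval = state
            out.append((ctrl, qval * step))
        return out

if __name__ == "__main__":
    # Toy usage: "demonstrate" a short pattern, then let the model take over.
    demo = [(1, v) for v in (10, 40, 80, 120, 80, 40)] * 4
    model = ControlMarkov()
    for ev in demo:
        model.observe(ev)
    print(model.generate(start=(1, 10), length=16))
```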
Part B Bios
Paul Hession (drums) was born in
Leeds in 1956. He took up drum-
ming at the age of 15 and since then
has played and broadcast in many
European and Scandinavian coun-
tries as well as Argentina, Mexico,
Cuba, the USA, and Canada. He has
played with many of the major fig-
ures on the free music scene, 这样的
as Peter Br ¨otzmann, Derek Bailey,
Evan Parker, Lol Coxhill, Sunny Mur-
ray, Marshall Allen, Frode Gjerstad,
Peter Kowald, Joe McPhee, Borah
Bergman, Otomo Yoshihide, and his
old friends Alan Wilkinson, Simon
Fell, Mick Beck, Hans-Peter Hiby,
Petter Frost-Fadnes, and Rus Pearson.
Collaborators from a different scene
are Squarepusher and DJ/producer
Paul Woolford. He is known to relish
the interaction of collective music-
making, but also responds to the
challenge of solo performance.
Finn Peters (sax, flute) has worked
with such pioneers as Frederick
Rzewski, Bill Frisell, DJ Spinna, Sam
Rivers, and Sa-Ra creative partners.
He has been involved in upwards of
200 recordings for other artists, and
has released a number of his own
recordings. In the words of Straight
No Chaser magazine, he is “the
blazing definition of a seriously heavy
player.” Awards and recognition
include the London Young Jazz
Musician Award, the BBC Jazz
Awards Best Band, the Jerwood Rising
Stars program, a nomination for the
Paul Hamlyn Composition Award,
and the Radio 1 Worldwide Awards
“Best Session” category. Throughout
2010 Peters worked on a new electro-
acoustic project entitled “Music of
the Mind” which deals with brain
waves in music and new forms of
algorithmic composition and impro-
visation. The album was described by
the Independent (London) as “nothing
like you have ever heard before.”
Isambard Khroustaliov is the solo
alias of electronic musician and com-
poser Sam Britton from the groups
Icarus, Fiium Shaarrk, and Leverton
Fox. Britton trained as an architect
at the Architectural Association in
London but took up music after se-
curing a recording contract as an
undergraduate. Since 1997 he has
recorded and released music for a se-
ries of independent electronic music
labels in the UK and the US (Output
Recordings, Temporary Residence,
Domino, and The Leaf Label, among
others) and performs internationally
with his various groups, solo, and in
collaboration with numerous impro-
vising musicians and ensembles. In
2006 he completed a master’s course
in electronic music and composi-
tion at IRCAM in Paris and in 2011
worked with the London Sinfonietta
as part of their Writing the Future
commissioning scheme.
Arne Eigenfeldt is a composer of
live electroacoustic music and a re-
searcher into intelligent generative
music systems. His music has been
performed around the world, and his
collaborations range from Persian
tar masters to contemporary dance
companies to musical robots. He has
presented his research at conferences
and festivals such as ICMC, SMC,
ICCC, EMS, EvoMusArt, GECCO,
and NIME. He teaches music tech-
nology at Simon Fraser University
and is the co-director of the Metacre-
ation Agent and Multi-Agent Systems
(MAMAS) lab.
Doug Van Nort is a sonic artist and
researcher whose work is concerned
with the complex and embodied na-
ture of listening, improvisation both
with and by machines, and the phe-
nomenology of time consciousness
and of collective co-creation. His
research takes the form of scholarly
writings on these phenomena, com-
posed and improvised electroacoustic
music, pieces of sound-focused art,
and digital artifacts designed and
developed in these pursuits. Van
Nort’s work is a synthesis of his
background in mathematics, media
arts, music composition, and perfor-
mance. Van Nort has recently joined
the School of Arts, Media, Perfor-
mance and Design at York University
in Toronto, continuing his work
in digitally mediated performance.
He often performs solo as well as
with a wide array of artists spanning
musical styles and artistic media.
Regular collaborators include Pauline
Oliveros and Al Margolis, and he
also works as a member of the Com-
posers Inside Electronics. His music
appears on several labels (e.g., Pogus,
Deep Listening, Attenuation Circuit,
and Zeromoon), and his writings on
sound/performance/electroacoustics
have been published by a number
of outlets (e.g., Organised Sound,
Leonardo Music Journal, and Jour-
nal of New Music Research). See
www.dvntsea.com.
Ollie Bown is a researcher, program-
mer, and electronic music maker. He
creates and performs music as one
half of the duo Icarus, and he performs
regularly as a laptop improviser in
electronic and electroacoustic ensem-
bles. He has worked with musicians
such as Tom Arthurs and Lothar
Ohlmeier of the Not Applicable
Artists, Brigid Burke, Adem Ilhan, Pe-
ter Hollo, and Adrian Lim-Klumpes.
Bown has designed interactive sound
for installation projects by Squidsoup
and Robococo, at venues such as the
Powerhouse Museum in Sydney, the
Oslo Lux, the Vivid Festival, Sydney,
and the Kinetica Art Fair, London. In
his research role he was recently local
co-chair of the 2013 International
Conference on Computational Cre-
ativity and is on the organizing
committee of the Musical Metacre-
ation Workshop and events series.
Nick Collins is Reader in Compo-
sition at Durham University. His
research interests include live com-
puter music, musical artificial intel-
ligence, and computational musicol-
ogy, and he is a frequent international
performer as composer-programmer-
pianist, from algoraves to electronic
chamber music. He co-edited The
Cambridge Companion to Electronic
Music (Cambridge University Press,
2007) and The SuperCollider Book
(MIT Press, 2011), wrote the Intro-
duction to Computer Music (Wiley,
2009), and co-wrote Electronic Music
(Cambridge University Press Intro-
ductions series, 2013). Sometimes,
he writes in the third person about
himself, but he is trying to give it up.
Shlomo Dubnov is a Professor in
Music and Computer Science at the
University of California, San Diego
(UCSD). His main research is on
applying statistical and machine
learning techniques to the modeling
of music, stories, and entertainment
media. His work on computational
modeling of style and computer au-
dition has led to development of
several computer music programs for
improvisation and machine under-
standing of music. Dubnov studied
composition and computer science in
Jerusalem and served as a researcher
at IRCAM in Paris. He currently
directs the Center for Research in En-
tertainment and Learning (CREL) at
UCSD’s Qualcomm Institute (Calit2)
and serves as a lead editor of ACM
Computers in Entertainment.
Greg Surges makes electronic music,
software, and hardware. His work has
been released on various labels,
and his research and music have
been presented at multiple festivals
and conferences. He is currently a
PhD student at the University of
California, San Diego. Previously, he
earned an MM in Music Composition
and a BFA in Music Composition
and Technology at the University of
Wisconsin, Milwaukee.
Michael Young is a composer and
researcher currently based at Gold-
smiths, University of London, and
will soon take up the post of Pro-Vice
Chancellor (Teaching and Learning)
at De Montfort University. He is co-
founder of the EPSRC-funded “Live
Algorithms for Music” network (2004),
which investigates autonomous sys-
tems for live music creation. He stud-
ied at the Universities of Oxford and
Durham. The “prosthesis” series has
been developing since 2007 and in-
cludes versions for clarinet, trio, flute,
oboe, and piano. Chris Redgate’s latest
CD release (Electrifying Oboe, Metier
Records) includes two versions of
“oboe prosthesis.” For more info and
audio, visit www.michaelyoung.info.
Matthew Yee-King is a lecturer in
creative computing at Goldsmiths
College as well as a computer music
composer, performer, and researcher.
His work covers a range of styles,
from the use of agent-based live im-
provisers to regular electronic music.
Recent activities include chairing
the workshops at the 2012 Supercol-
lider Symposium, including a live
algorithm hackathon, and extensive
involvement in the Arts Council–
funded Music of the Mind project
alongside composer Finn Peters. 他
has been involved in significant pub-
lic engagement activities, presenting
his arts/science crossover projects at
Science Festivals around the UK, as
well as on national television and
radio. He has performed live inter-
nationally and nationally as well
as recording many sessions for BBC
radio. In the past his solo music has
been released on electronic music
imprints such as Warp Records and
Rephlex Records. Past collaborators
include Jamie Lidell, Tom Jenkinson
(Squarepusher), Finn Peters, and Max
de Wardener.
Supplementary Audio/Video
Examples for Articles
from CMJ 38:4
1. An Intuitive Synthesizer of
Continuous Interaction
Sounds: Rubbing, Scratching,
and Rolling—Simon Conan,
Etienne Thoret, Mitsuko
Aramaki, Olivier Derrien,
Charles Gondre, Richard
Kronland-Martinet, 和
Sølvi Ystad
In this video, an intuitive synthesizer
for rubbing, scratching, and rolling
interaction sounds is presented. The
real-time synthesizer allows the user
to independently control the different
types of interaction and to morph
between them, and to control the
properties of the object, such as shape
and material, and to morph between
the material categories. In the first
part of the video, the intuitive control
of the properties of an impacted,
resonant object is presented. 在
the second section, the intuitive
control of the interaction part is
demonstrated by controlling the
synthesizer with a graphical tablet.
2. Cellular Automata
Histogram Mapping
Synthesis—Jaime Serquera
and Eduardo Reck Miranda
See the article’s appendix for descrip-
tion of the provided examples.
3. Sound Synthesis of a
Gaussian Quantum Particle
in an Infinite Square Well—
Rodrigo F. Cadiz and Javier
Ramos
Audio Example 1
First 90 seconds of the spectrogram of
the full revival time. The spectrum
retakes its initial form towards the
end of the revival time, as predicted
by Equation 29, and exhibits a mirror
revival at half the revival time, 作为
predicted by Equation 31. (Please
refer to Figures 7 和 8.)
Audio Example 2
Spectrogram of a single bounce. 这
slopes of the frequency bands change
according to the direction of the
wavepacket. When a bounce occurs,
approximately at time t = 7, a change
in slope happens. (Please refer to
Figure 9.)
Audio Example 3
Spectrogram for a linear increase
in α from 1.7 到 400. The initial
frequency band gets narrower around
the value associated with p0, in this
case approximately 3,200 Hz. (Please
refer to Figure 10.)
Audio Example 4
Spectrogram for a linear increase
in the mass from 2 to 2.06. The
phase bands get further apart in
frequency as the wave packet’s group
velocity diminishes. (Please refer to
Figure 11.)
Audio Example 5
Spectrogram for a linear increase in
the initial momentum from −0.2 to
0.43. The frequency band around p
moves along with it, from a center
frequency of 70 Hz in the case p =
−0.2 to 3,500 Hz when p = 0.43.
(Please refer to Figure 12.)
Audio Example 6
Spectrogram for a linear increase in
the length of the well in steps of
600, 700, and 800, at times t = 0,
t = 2, and t = 4. The frequency band
gets narrower and moves towards
the lower frequencies. As changing
the length of the well implies a
full recalculation of the quantum
particle’s dynamics, audible clicks
appear in the sound signal when the
length is changed in real time. (Please
refer to Figure 13.)
Audio Example 7
Spectrum for linear increase of N
from 0 to 60. The cutoff frequency
depends directly on N. Because N is an
integer in this implementation, clicks
are produced when this parameter is
changed continuously. (Please refer to
Figure 14.)
Audio Example 8
Spectrum for change in the mass from
0 to 1 (at time t = 2) and back to 0
(at time t = 13). The ordering of the
frequencies is affected when the
mass is varied around small values.
The behavior of the momentum
distribution is chaotic when the mass
is near zero, and it is very organized
and similar to a sawtooth signal when
the mass is near one. (Please refer to
Figure 15.)
4. Sound Synthesis of
Auditory Distortion
Products—Gary S. Kendall,
Christopher Haworth, 和
Rodrigo F. Cadiz
Please see the article’s appendix for
description of the provided examples.