Kerry L. Hagan
Digital Media and Arts Research Centre
CS 2011
University of Limerick
Limerick, Ireland
Kerry.Hagan@ul.ie

Textural Composition:
Aesthetics, Techniques,
and Spatialization for
High-Density Loudspeaker
Arrays

Abstract: This article documents a personal journey of compositional practice that led to the necessity for working
with high-density loudspeaker arrays (HDLAs). I work with textural composition, an approach to composing real-time
computer music arising from acousmatic and stochastic principles in the form of a sound metaobject. Textural
composition depends upon highly mobile sounds without the need for trajectory-based spatialization procedures. In
this regard, textural composition is an intermediary aesthetic—between “tape music” and real-time computer music,
between sound objects and soundscape, and between point-source and trajectory-based, mimetic spatialization.

I begin with the aesthetics of textural composition, including the musical and sonic spaces it needs to inhabit. I
then detail the techniques I use to create textures for this purpose. I follow with the spatialization technique I devised
that supports the aesthetic requirements. Finally, I finish with an example of an exception to my techniques, one
where computational requirements and the HDLA required me to create a textural composition without my real-time
strategies.

Textural Composition and Its Aesthetics

Textural composition is the practice of composing
primarily with sonic texture over other musical ele-
ments to create a particular experience of space and
time. Specifically, a textural composition attempts
to give the listener a large, immersive experience of
sound at slow, environmental movements of time.
The philosophy of textural composition combines
the aesthetics of the sound object from acousmatic
music with large sound masses inspired by Iannis
Xenakis. The following sections describe the nature
of sound masses in textural composition and the
incumbent spatialization strategies to engender the
musical and acoustic space. They summarize points
and arguments made in greater detail elsewhere
(Hagan 2008a, 2008b). I will highlight the conclusions and the main reasons behind them.

Sound Objects

Texture in music is a distinctly useful metaphor for
the experience and construction of certain musics,
especially when used in opposition to gesture. A
gesture is experientially short, with a perceptible

Computer Music Journal, 41:1, pp. 34–45, Spring 2017
doi:10.1162/COMJ_a_00395
© 2017 Massachusetts Institute of Technology.

start, Dauer, and end. It has directionality and
impetus. It lasts for a human length of time,
to paraphrase Denis Smalley (1997). Conversely,
texture becomes about inner detail at the expense
of a direction and momentum, existing in what
Smalley calls environmental time.

The literal definition of texture can inform and
suggest musical treatment. Texture is a character-
istic of an object, requiring a substrate on which
to exist. In focusing on texture, the object itself
becomes irrelevant. The size and shape of the object,
for example, no longer provide significant
information to the observer. Musically, this suggests
that the sound object itself must change to enforce
awareness of its texture, rather than its size and
shape, edges and boundaries, or the placement of the
sound object in the world.

The acousmatic sound object manifests in a
work through spectromorphological boundaries,
separating it in timbre and in time from other sound
objects. As a musical building block, the sound
object has become a crucial tool for compositional
devices in acousmatic composition. Putting aside
the space of a sound object for a moment, the
perception, manipulation, and development of sound
objects become the form and catalyst of acousmatic
works. Imagine, however, that an acousmatic work
is frozen in a moment of its life, exposing a single
sound object to scrutiny. The sound object is then


substantially magnified so that the listener’s entire
perspective is of nothing but the sound object in
that moment.

The texture metaphor allows for this process:
focusing attention on nothing but the inner detail
of something that is existing outside of time and
larger than the space a listener experiences. In other
words, there must be a sound metaobject. It retains
its identity by way of its inner details, but the
metaobject is so large that it subsumes the listener.
An aside: One approach to focusing on inner de-
tails comes from drone music, where long, sustained
tones change subtly over time, inducing a listener to
target on microscopic changes. Unlike textural
composition, however, drone music does not necessarily
expand the imagined size of the sound. Textural
composition requires multitudinous, smaller
sounds that reside in and on the expansive metaob-
ject, like the grit on sandpaper. This facilitates the
perception of the metaobject’s magnitude. It is made
up of many miniature components, which are now
heightened to apperception. This idea is congru-
ent with Natasha Barrett’s argument that density,
texture, and amplitude contribute significantly to
“implied spatial occupation” (Barrett 2002, p. 316).
The richness of the texture, its long duration, Und
its stochastic asynchronicity also contribute to its
perceived volume (Truax 1998).

When this happens, a new experience emerges,
one that approaches what R. Murray Schafer called
a “lo-fi” soundscape (Schafer 1994). In this sound-
scape, sounds become too congested and innu-
merable to be individuated, and they impart an
impression of a circumambient acoustic scene.
However, textural composition does not entirely
become a lo-fi soundscape, either. It exists on
a fragile edge between countless sound objects
and the lo-fi soundscape. This is one reason I
have characterized textural composition as an
intermediary aesthetic, music living between sound
objects and soundscape.

Space and Textural Composition

“Space in sound” (timbre) and “sound in space” (dif-
fusion) are fundamental aspects of acousmatic music

(Truax 1998), and perhaps in most music. It follows
that the presence of a sound metaobject requires
both large musical space (imagined through musical
perception) and immersive, acoustic space (created
in the world). Spatialization must accommodate the
size of the metaobject, assert the complete envel-
opment of the listener, and convey the complexity
of the inner detail. As a result, there are four
important, interrelated aspects to the aesthetics of the
spatialization: 1) multichannel surrounding sound,
2) a lack of “front” in perspective, 3) an unbounded
sense of space (both acoustically and musically), and
4) significant spatial motion.

In enveloping sound, it is clear that only multi-
channel arrays can create the surround experience.
For this reason, textural composition requires a
minimum of eight channels to be effective. More
speakers, especially elevated speakers, enhance
the experience, so high-density loudspeaker arrays
(HDLAs) become crucial spaces for textural
composition (Harrison 1998; Rolfe 1999; Sazdov, Paine,
and Stevens 2007; Otondo 2008; Normandeau 2009).
At the same time, there are arrays with large
numbers of speakers that would not suit textural
composition. In some arrays, especially those
designed for live diffusion, speakers are concentrated
at the front of the audience. Smalley (2007) defines
a number of acoustic spaces around listeners. In
acousmatic music, prospective space, the front
view for the listeners, is the main arena for the
acousmatic image. Other spaces exist, defined by
the relationship of the listener and the imagined
“view” of the music. In these spaces and prospective
space, the listener is outside the sound, viewing
it as it moves and turns in space. Because the
listener must be inside the sound metaobject, these
spaces defeat perception of the metaobject. There
is one space defined by Smalley that is conducive
to textural composition: immersive space, a kind
of “circumspace.” Immersive space is “where the
spectral and perspectival space is amply filled,
surrounding egocentric space, where the pull of
any one direction does not dominate too much,
and where the listener gains from adopting, Und
is encouraged to adopt, different vantage points”
(Smalley 2007, p. 52). For this reason, there should be
no frontal perspective dominating the environment.



The sound metaobject must be unbounded
imaginatively and acoustically. Therefore, textural
composition must use unbounded space, even
though bounded space is far easier to achieve.

The listener’s point of view is one way in which
space is bounded. There is little a composer can
do to encourage a nondirectional viewpoint. The
material can suggest multiple perspectives by
saturating the circumspace to create immersive
space. Alternatively, the composer can invite
the audience to move through the room during the
performance. The latter is met with difficulty in
many performance spaces, however. Therefore, a
consistent, unoriented approach to spatialization
unbinds the limited perspectival space, becoming
circumspace.

Discernible, individual sounds can also bound
space by articulating it through movement and
psychoacoustic triggers. The nature of textural
composition, that edge between sound objects and
soundscape, alleviates this problem, as it does not
have individuated objects moving through space.

Too many sounds can also contract a space. Here
I draw a subtle distinction between a sound mass
and a sound wall. Inspired by much of Xenakis’s
music, my work in textural composition deals
with large sound masses. Masses can be dense,
thin, transparent, or dark—they are nonspecific. A
sound wall, on the other hand, implies a heavy,
impenetrable, and bounding space. A sound mass
can surround a listener, appearing to extend beyond
the horizon. A sound wall blocks space beyond its
perimeter. Though a sound wall can have texture,
only a sound metaobject is unbounded. Thus, textural
composition is about sound masses, not sound walls.
Spatially, this means that textures extend distally,
as well as enveloping the listener.

There is nothing inherently objectionable to static
textures in textural composition. I found, however,
that when texture becomes the primary component
of a work, then it must take on dynamic, changing
processes. This may occur in the sonic nature of
the texture itself. More importantly, I discovered
that mobile textures enhance textural composition.
Electroacoustic music is predominantly engaged
with mimetic spatialization, that is, the mimicry
of actual sounds in space. Sometimes it capitalizes

on surrealism by making identifiable sound objects
move faster, higher, longer, etc., than might be their
natural circumstance. Imagine an airplane flying
around like an insect. This mimetic spatialization
dissimulates the loudspeaker to create a perception
of space between speakers, e.g., the work of Ambrose
Field (Austin 2001). But this mimicry means that
there is a gesture in space, which defeats textural
Komposition. An alternative approach is point-
source spatialization. Point-source spatialization
imbues each loudspeaker with musical agency
as performers (Burns 2006). This promotes the
circumspace, as each speaker contributes equally to
the experience.

This precipitates another aspect of spatialization

in textural composition, perhaps the most mean-
ingful to the aesthetics of spatialization. Textural
composition needs to have dynamic, active, exagger-
ated movement without coherent trajectories. There
is thus a balance between the hidden speakers of
mimetic spatialization and the performer-speaker in
point-source spatialization. Colby Leider identifies
these extremes as the “dual nature” of loudspeakers
(Leider 2007, p. 1891), but I believe that spatializa-
tion can exist on a fragile boundary between both,
another intermediary aesthetic.

There is an additional, fifth aspect of textural

spatialization not mentioned earlier because it
is an element of my larger philosophy: real-time
spatialization. My concerns regarding real-time
generation of music are expounded in the next
section. Harrison (1998) and Smalley (Austin 2000)
assert that space as a central aspect in music-making
is mostly unique to acousmatic practice. I disagree
with this, but the way in which they describe
sound in space informs my approach, even if as
an extension to acousmatic aims. The real-time
diffusion in acousmatic music is paralleled in the
real-time spatialization of textural composition.

Creating Textures

In this section, I describe the way in which I cre-
ate textures for two pieces, each using different
methods. Both use the same spatialization tech-
nique, described in the next section. The first piece,



Real-Time Tape Music III (2008), manipulates sound
files for texture sources. The second piece,
Morphons and Bions (2011), uses noise-based synthesis
techniques for its material.

Real-Time Tape Music III

The “Real-Time Tape Music” series evolved from
discussions I had with many composers and from
trends I saw in festivals and conferences. On the
one hand, some computer music composers felt
that real-time computer music was the obvious
solution to “dead” fixed-medium works. On the
other hand, composers working in fixed media—
namely, acousmatic composers—similarly felt that
working with recorded sounds was the apparent
solution to “dead” computer music. Fixed-medium
works were considered dead because they were
unchangeable regardless of venue or audience.
Computer music was considered dead because the
sounds were lifeless and uninteresting compared to
real-world sounds. I have oversimplified both the
arguments and solutions on each side, but this was
my initial inspiration. I wanted to create music that
used real-world sounds, and I processed them with
classical tape techniques in real time. Initially, I
felt I was bridging a gap that was artificially and
uselessly wide. Translating acousmatic and fixed-
medium aesthetics to real-time computer music
practices remains a crucial part of my compositional
aims in all of my works.

The pieces in the “Real-Time Tape Music”
series sample a number of sound files containing
recordings of animals, musical instruments, Und
quirky sounds that evoked careful listening, e.g.,
an amplified recording of soda bubbles in a can.
The samples were played with different speeds,
lengths, directions, and loudness—standard classical
tape techniques. Random processes set the values
for each of these characteristics. Other random
processes selected onset times in the sound files.
Timed events triggered different layered montages
of sounds, creating the structure of the pieces.
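These random processes can be illustrated with a short sketch that picks playback parameters for one sampled segment. The function name, parameter ranges, and speed choices here are hypothetical assumptions for illustration, not the values used in the pieces:

```python
import random

# Hypothetical sketch of randomized classical-tape parameters:
# onset, length, speed, direction, and loudness for one sample event.
def random_sample_event(file_duration_s):
    """Choose playback parameters for one sampled texture stream."""
    return {
        "onset": random.uniform(0.0, file_duration_s),       # where in the file to start
        "length": random.uniform(0.05, 2.0),                 # segment length in seconds
        "speed": random.choice([0.25, 0.5, 1.0, 2.0, 4.0]),  # transposition by speed change
        "direction": random.choice([1, -1]),                 # forward or reversed playback
        "loudness": random.uniform(0.2, 1.0),                # linear amplitude
    }

event = random_sample_event(30.0)  # e.g., a 30-second source recording
```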

As computer processors became more powerful,
more of these layers could be added. Eventually,
I layered so many sounds at once that something
else emerged: textural composition. This led to
my first work in textural composition, Real-Time
Tape Music III. No longer sounding simply like
acousmatic music made in real time (as did the first
two works in the series), Real-Time Tape Music
III demanded that I expand my compositional aim,
and my first concept of the sound metaobject arose.
Using ten sound files, each sampling of a file led to
a texture “stream.” Different combinations of the
streams, constructed by ear, formed eight possible
textures.

With so many parameters and textures, I decided
to control the structure of the piece with stochastic
processes. In this case, the timed events controlled
macroscopic form, while a Markov chain determined
which texture would surface at any given moment.
I borrowed the transition table from Xenakis’s
Analogique A+B (cf. Xenakis 1992).
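A first-order Markov chain of this kind can be sketched as follows. The actual transition table borrowed from Analogique A+B is not reproduced here, so the uniform 8-by-8 matrix below is only a placeholder:

```python
import random

# Pick the next texture index from the current one using a row of
# transition probabilities. The uniform table is a placeholder, not
# the table from Analogique A+B.
def markov_step(state, table):
    r = random.random()
    cumulative = 0.0
    for next_state, p in enumerate(table[state]):
        cumulative += p
        if r < cumulative:
            return next_state
    return len(table[state]) - 1  # guard against floating-point shortfall

table = [[1.0 / 8] * 8 for _ in range(8)]  # placeholder probabilities
state = 0
sequence = []
for _ in range(16):
    state = markov_step(state, table)
    sequence.append(state)
```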

The final work consists of two contiguous
movements. The first introduces the textures
through random processes. The second movement
relies on the Markov chain for stochastic movements
between textures. Each movement manifests a
different sound metaobject. The sound metaobjects
that result are large, slowly shifting sound masses,
spanning the audible frequency spectrum. The
idiosyncratic traits of each sound file establish
memorable moments of sounds that recur in various
forms. But the montage of these sounds creates a
consistency throughout the work. In this way, the
metaobjects exist for the duration of the movement,
but the ephemeral inner details emerge, shift, and
evaporate quickly.

Morphons and Bions

Returning to arguments that digitally synthesized
sounds were “flat” or dead, I decided to experiment
with creating textures from synthesized sounds that
were more “living,” or acoustically natural, while
exploring textural composition. I was still interested
in the sound metaobject and acousmatic approaches,
but again, looking for a way to combine them with
real-time computer music.


One of the most relevant components of acoustic
sounds, I believe, is the inherent noise in the sound.



In some cases, that means white noise. But in most,
it means correlated but random fluctuations in the
frequency spectrum. Therefore, I combined standard
additive synthesis and frequency modulation (FM)
synthesis with noise generators. Several philoso-
phies of noise informed this approach, affecting
the form and direction of the work (Hagan 2012,
2013).

The wide range of sounds in Morphons and Bions

comes from two fundamental synthesis methods,
which in some cases are combined with other
processes. The first is additive synthesis modulated
with white noise. The resulting signal is

x(t) = Σ_{k=1}^{6} w_k(t) sin(2π h_k(t) f_0(t) t + d(t) n(t)),

where

n(t) = white noise;
d(t) = depth (amplitude of white noise), changing in time;
f_0(t) = fundamental frequency, changing in time;
k = partial number;
w_k(t) = Gaussian random variable with mean 1/k; and
h_k(t) = Gaussian random variable with mean k.

The noise in this synthesizer does not affect
the amplitude of the spectrum directly, but the
increase of noise affects the sidebands present as
they would in any FM synthesizer. The sidebands are
random, Jedoch. By changing the depth of noise,
the synthesizer can sound like a simple oscillator
bank or nearly white noise at the extremes. Because
one white-noise generator affects all of the partials,
psychoacoustic fusion enforces the perception of
a complex sound, rather than six separate sounds.
Amplitude envelopes further fuse the synthesizer.
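A minimal offline rendering of this synthesizer might look like the sketch below. The sample rate, depth, and fundamental are illustrative assumptions; one Gaussian draw per partial stands in for the slowly changing w_k(t) and h_k(t), and a single white-noise sample per frame is shared by all six partials, as described above:

```python
import math
import random

def noise_modulated_additive(f0=110.0, depth=0.5, dur=0.1, sr=8000):
    """Render the six-partial, noise-modulated additive signal."""
    w = [random.gauss(1.0 / k, 0.05) for k in range(1, 7)]   # amplitudes, mean 1/k
    h = [random.gauss(float(k), 0.05) for k in range(1, 7)]  # harmonic ratios, mean k
    out = []
    for i in range(int(dur * sr)):
        t = i / sr
        noise = random.uniform(-1.0, 1.0)  # one shared white-noise sample per frame
        out.append(sum(w[k] * math.sin(2 * math.pi * h[k] * f0 * t + depth * noise)
                       for k in range(6)))
    return out

sig = noise_modulated_additive()
```

Because the same noise sample drives every partial's phase, the partials fuse perceptually into one complex sound, as the text notes.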
The second basic synthesizer is frequency-
modulated noise. Cascaded band-pass filters with
the same center frequency filter the noise to result
in a strongly colored, almost pitched, noise. Ein
oscillator controls the center frequency of the
filters, creating sidebands while retaining some
quality of the noise. The depth and frequency of
the oscillator determine the sidebands that emerge.

Random processes control the depth and Q of the
filters.

The center frequency of the band-pass filters is

f_c = d(t) sin(2π f_filter t),

where f_filter is the frequency of the oscillator
controlling the band-pass filter’s central frequency
and d(t) is the depth, or amplitude, of the filter
oscillator, controlled by a random process. The
sidebands then appear at d(t), d(t) ± f_filter, d(t) ±
2 f_filter, d(t) ± 3 f_filter, etc. (Hagan 2013).

By changing Q, this synthesizer can also range in
noisiness, much like the noise-modulated additive
synthesis.
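The idea can be sketched with a single second-order band-pass section standing in for the cascaded filters; the depth, oscillator rate, and Q below are illustrative assumptions, not values from the piece:

```python
import math
import random

def fm_noise(depth=800.0, f_filter=5.0, q=20.0, dur=0.1, sr=8000):
    """White noise through a band-pass filter whose center frequency
    is swept by an oscillator (one biquad section for illustration)."""
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for i in range(int(dur * sr)):
        t = i / sr
        # oscillating center frequency, offset to keep it positive
        fc = abs(depth * math.sin(2 * math.pi * f_filter * t)) + 50.0
        w0 = 2 * math.pi * fc / sr
        alpha = math.sin(w0) / (2 * q)
        x0 = random.uniform(-1.0, 1.0)
        # standard constant-skirt-gain band-pass biquad
        y0 = (alpha * x0 - alpha * x2
              + 2 * math.cos(w0) * y1 - (1 - alpha) * y2) / (1 + alpha)
        out.append(y0)
        x1, x2, y1, y2 = x0, x1, y0, y1
    return out

sig = fm_noise()
```

Raising Q narrows the band and makes the result more pitched; lowering it lets more of the noise through, matching the range of noisiness described above.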

To create sounds with more complexity, I com-
bined these synthesizers. The frequency-modulated
noise replaced the white noise in the noise-
modulated additive synthesizer, creating partials
with sidebands, forming a rich, quasiharmonic
spectrum.

Amplitude envelopes articulated these synthe-
sizers. Some envelope durations were fixed, Aber
in other cases, random processes determined the
Dauer. In another synthesis technique, short-
ened envelopes with a mean duration of 23 ms,
chained iteratively, added amplitude modulation to
the synthesizer. This created additional sidebands
approximately ±43 Hz around the original sidebands
generated by FM.
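The ±43 Hz figure follows directly from the mean envelope duration: a chain of envelopes repeating every 23 ms on average amounts to amplitude modulation at roughly 1/0.023 ≈ 43 Hz, which places sidebands about ±43 Hz around each FM sideband:

```python
mean_envelope_s = 0.023            # mean envelope duration, 23 ms
am_rate_hz = 1.0 / mean_envelope_s
print(round(am_rate_hz))           # about 43 Hz, matching the sidebands above
```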

Timed changes in the parameters of random
variables determining the timbre of sounds create
the overall form of the work. Generally speaking,
the timbres begin quite noisily. Over time, they
become less noisy and more pitched. By the end
of the work, the pitches become stable at integer
frequency relationships to each other. The end of
the piece complements this process: isolated, digital
clicks of instantaneous amplitude changes, familiar
to anyone working in digital synthesis.

Morphons and Bions addresses questions of noise

from synthesis to form. The definition of noise
is fluid, depending mainly on context. Does it
mean randomness? Does it mean something that
distracts from a signal? Or does it mean a type
of sound, qualified by acoustic characteristics and
psychological response? First, acoustic noise is



fairly easy to define, and it was the entry point
for Morphons and Bions. By combining digital
synthesis with correlated noise, sounds became
more alive, interesting, and rich. It is similar to
acoustic noise in traditional instruments, welche
Cowell identified a long time ago (Cowell [1929]
2004). But the movement from noise to pitch is what
John Cage (1958) proposed as the new consonance
and dissonance of modern music (Hagan 2012).

When noise is counterposed to signal, it implies
that there is a message in the signal. However, in Morphons
and Bions, noise is the signal. The piece relies on
the random, from noise itself to the procedures
controlling the synthesis methods. For this reason,
the piece creates an interesting paradox identified
by Kim Cascone’s (2000) post-digital aesthetics of
failure: that noise, which is signal, is no longer noise
(Hagan 2012).

Perhaps the most complicated condition of the
aesthetics of noise in Morphons and Bions comes
from its roots. I was still concerned with textural
composition, extending the acousmatic experience
to the sound metaobject, and doing so in real time.
As such, noise is not just signal, but meaning and
cultural artifact (Hagan 2012). The “nonmessage”
as cultural artifact reflects the thoughts of Douglas
Kahn (1999).

Ultimately, these processes and philosophies
resulted in textures of a sound metaobject. Like Real-
Time Tape Music III, Morphons and Bions represents
a frozen, gargantuan metaobject inhabited by the
listener, whose inner details mutate over time.

Stochastic Spatialization Technique in HDLAs

I compose and work in Pure Data (Pd), so I created
my spatialization algorithm in that software. The
controller patches and abstractions can easily be
reused in any composition. Using basic amplitude
panning, my spatialization algorithm takes the
speaker location in the circle (the front arbitrarily
assigned to 0◦, continuing around the space clock-
wise until 360◦). A texture stream is assigned a
virtual angle, controlled in real time by random
processes. The stream is also given a width, or signal
Rock, in degrees. For each speaker, a Pd abstraction

Figure 1. The virtual angle of the signal (α), speaker
angle (φ), and the phased copy of the signal. The
cosine curve determines the amplitude of the signal
for any speakers located within the signal skirt (k).
(Figure from Hagan 2008a.)

first calculates whether the speaker is within the
skirt of the virtual angle. It then uses a cosine curve
to calculate the amplitude of the texture stream for
that speaker. I named this abstraction weightor∼.
The simple calculation is

amplitude of signal in speaker = cos(π |α − φ| / k),

where

k = signal skirt,
α = virtual angle of the signal (texture stream), and
φ = speaker location.

By ramping between virtual angles, the signal
moves clearly around the circle. The signal skirt
may be widened for arrays with fewer speakers to
avoid gaps in the sound between speakers.
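Outside of Pd, the weightor~ rule can be sketched as a plain function. Here the skirt k is read as the full width of the cosine curve, so a speaker at its edge (|α − φ| = k/2) receives zero amplitude; that reading is an assumption drawn from Figure 1:

```python
import math

def speaker_amplitude(alpha, phi, k):
    """Gain of a texture stream at virtual angle alpha in a speaker at
    angle phi, for a signal skirt of full width k. Angles in degrees."""
    diff = abs(alpha - phi) % 360.0
    diff = min(diff, 360.0 - diff)       # shortest way around the circle
    if diff > k / 2.0:
        return 0.0                       # speaker outside the skirt: silent
    return math.cos(math.pi * diff / k)  # 1 at the center, 0 at the edge
```

Ramping alpha over time then moves the stream smoothly from speaker to speaker, and widening k spreads it across more of them at once.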

Figur 1 shows how the amplitude is calculated
for the original signal and its phased copy. I explain
the phased copy, a variable delay moving in and out
of phase, nächste.

Maja Trochimczyk (2001) identifies spatialization
as either a circle (sound around the audience) or a net
(sound throughout the audience). Ideally, textural
composition would exist in a net, i.e., speakers
around and throughout the audience. Most HDLAs
are circles, however. My spatialization technique
therefore assumes that speakers exist outside the
audience’s physical location. Capitalizing on the
perception of interaural time delays, a copy of the
texture stream is sent to an oscillating variable delay
of a few milliseconds. This delayed, phased copy is



Figure 2. The accumulated effects of the spatialization
algorithm: first the simple circular movement (a),
then the appearance of center crossing with a
variable-delayed copy opposing the original (b),
and, finally, the effect of reverberation (c).
(Figure from Hagan 2008a.)

Figure 3. Result of multiple, independent audio
streams. Each curve is a representation of how
individual texture streams are moving throughout the
listening space, each relying on independent,
combined effects of panning, delays, and
reverberation. (Figure from Hagan 2008a.)


sent to a virtual angle 180° from the original (see
Figure 1). The result is that not only does the signal
appear to circle the listener, but it also appears to
cross the center along the way. This approximates
a “net” when only a “circle” exists. This was
combined with the amplitude calculation above
in an abstraction called phweightor~. Figure 2
depicts the accumulated effects of the spatialization
algorithm.
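A non-real-time sketch of the phased-copy idea: the stream is duplicated, run through a slowly oscillating delay of a few milliseconds, and panned to the opposite virtual angle. The delay range and LFO rate here are illustrative assumptions:

```python
import math

def phased_copy(signal, sr=8000, max_delay_ms=5.0, lfo_hz=0.2):
    """Return a copy of the signal with an oscillating short delay,
    approximating a variable delay line moving in and out of phase."""
    out = []
    for i in range(len(signal)):
        t = i / sr
        # delay sweeps between 0 and max_delay_ms
        delay_s = 0.001 * max_delay_ms * 0.5 * (1 + math.sin(2 * math.pi * lfo_hz * t))
        j = i - int(delay_s * sr)
        out.append(signal[j] if j >= 0 else 0.0)
    return out

def opposite_angle(alpha):
    """The phased copy is panned 180 degrees across the circle."""
    return (alpha + 180.0) % 360.0
```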

When multiple texture streams, making up a
single texture, are spatialized independently, Die
result is a swarming, motile sound metaobject
surrounding the listener. By introducing a simple,
cheap reverberation patch, distance and space
become even larger. Random processes control the
wet/dry mix, varying the distance of the sound, als
well. There is no need for sophisticated reverberation
because the layers of sound and motion obscure the
details of reverberant sound. Figure 3 shows the
result of multiple, independent audio streams.

This algorithm was originally designed for any
number of horizontal speakers. Scaling azimuth
speakers is easy. Simply adding more weightor~
and phweightor~ abstractions, one for each
speaker, can adjust the spatialization for any number
or placement of speakers. The signal skirt may need
to be adjusted, as well, to account for the distance
between speakers.

At the same time as I was developing textural
composition, I built the Spatialization and Auditory
Display Environment (SpADE) at the University
of Limerick. This 32.2-speaker system can be
configured as needed. I used a configuration of 16
speakers placed symmetrically at eye level with
an additional matching speaker placed above each: a
lower and a higher ring of speakers evenly distributed
around a circle.

The algorithm did not port well to elevated
speakers. One solution would have been to integrate
vector base amplitude panning (VBAP; see
Pulkki 1997) into the patch, but concerns about
processor demands arose. The sophistication and
verisimilitude of VBAP were not necessary for the
spatialization environment, either, because realistic
locations and movements of sounds were not
essential.

I experimented with assigning false angles to
the upper speakers. One method was to assign the
lower speakers values from 0° to 180° and the upper
speakers 180° to 360°, as seen in Figure 4. The result
was that the sounds moved more quickly through
the spaces, but signals did not move from the lower
ring to the upper ring except at one point: the front
speaker.



Figure 4. One possible configuration of false
speaker angles.

Figure 5. Assigned speaker angles for SpADE.

Instead, I assigned the upper and lower rings
alternating angles, shown in Figure 5.

In this configuration, the rate of motion was the
same, but sounds moved up and down as well. I did
not need the long, coherent trajectories of path-based
spatialization. This adaptation to elevated speakers
fared well, creating a sense of turbulent, moving
sounds surrounding the listener on all sides.

Adapting Textural Composition to Fixed Medium

The 2015 conference of the Society for Electro-
Acoustic Music in the United States (SEAMUS)
provided an opportunity to present a work in
the Cube, a four-level, 124.4-channel array at
the Moss Arts Center at Virginia Tech (see Lyon
et al. 2016 for more detail on the Cube). The



technical specifications of the space required a
fixed-medium composition. Also, the array has
three tiers of speakers and ceiling speakers. This
venue thus required me to adjust my process.
First, I could not generate the piece in real time.
Although real-time composition is important to
me, fixed media allowed me to use a much more
computationally expensive synthesis algorithm
with many more texture streams. Second, my
spatialization algorithm, though adjustable as
detailed earlier, could not be implemented because
the synthesis could not be generated in real time, nor
would false angles accommodate the large number
of elevated speakers. The situation required me to
rethink textural composition spatialization. The
result was Cubic Zirconia.

Texture Synthesis in Cubic Zirconia

The most common stochastic process in my works
is the Markov chain. In a discussion with Miller
Puckette, he proposed another approach to gener-
ating numbers. He suggested that composers who
use Markov chains want the results of the process
to be more uniform: if there is a 75 percent chance
of an occurrence, then at any given time roughly
75 percent of all previous results should be that
occurrence. This is not what I want from my Markov
processes, but I found the notion irresistible.

The algorithm that Puckette designed, called z12,

takes a set of twelve probabilities for the numbers
0 to 11. It tallies how many of each number has
already occurred, then outputs the number that
has the greatest deficit. In this way, one can be
assured that, at any given point, the percentage
of each number closely matches the probability
of that number. For simple ratios, this creates a
fixed, repeating pattern. When the probabilities are
irrational, however (e.g., golden ratios), the patterns
that emerge are quite complex. Certain sequences
can repeat, or nearly repeat, with minor differences.
Sections of sequences can repeat occasionally. There
is, however, no small number of iterations for which
an exact repetition of sequences occurs. At this
point, we cannot prove that the system will or will
not exactly repeat at all (Puckette 2015).
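As a rough illustration, the deficit-driven selection that z12 performs can be sketched in a few lines of Python. This is a paraphrase of the published description, not Puckette's implementation; the function and variable names are mine.

```python
def z12(probs, seed_tallies=None):
    """Sketch of a maximally uniform sequence generator, after the
    description of Puckette's z12. probs: 12 relative probabilities
    for the numbers 0-11. seed_tallies: optional initial counts."""
    total = float(sum(probs))
    target = [p / total for p in probs]          # desired proportions
    count = list(seed_tallies) if seed_tallies else [0] * 12
    while True:
        n = sum(count) + 1                       # total outputs after this one
        # deficit: expected occurrences so far minus actual occurrences
        deficits = [target[i] * n - count[i] for i in range(12)]
        choice = max(range(12), key=deficits.__getitem__)
        count[choice] += 1
        yield choice
```

With equal probabilities this simply cycles through 0–11; with irrational ratios the sequence becomes aperiodic in practice, as described in the text.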

At first, I applied this to larger events, much as
I use Markov chains. There is room for discovery
at this level, but my initial results were under-
whelming. Then Puckette programmed the patch
to output a one or zero, based on the sequence of
numbers. For example, if a three follows a two, without
a four intervening, generate ones until a five appears,
and then generate zeros until a three follows a two
again. I called this the “logic patch” because *∼
objects act as AND gates, sending out ones and
zeros. Then we accelerated the output to audio rate,
creating samples of ones and zeros. The result was
a complex waveform that transformed both micro-
scopically and macroscopically. Occasionally, the
process would go silent, starting up again when the
correct sequences returned. Because z12 does not
repeat for a significant number of outputs, a single
z12 can generate a complex waveform that does not
sound the same for the average duration of a work.
We have not listened longer than ten minutes, but
in that time, no repetitions appeared.
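The gating behavior can be sketched as a small state machine. The rule below is the single example given in the text; the actual patches use several such rules, and the stand-in random driver replaces the z12 sequence only for illustration.

```python
import random

def logic_gate(numbers):
    """Sketch of the 'logic patch' idea: map a stream of numbers 0-11 to
    ones and zeros by sequence rules. Rule used here (the example from
    the text): once a 3 follows a 2 with no 4 intervening, emit 1s until
    a 5 appears, then emit 0s until the 2-then-3 pattern recurs."""
    gate = 0
    armed = False                     # a 2 has appeared, not yet canceled
    for n in numbers:
        if gate == 0:
            if n == 2:
                armed = True
            elif n == 4:
                armed = False         # an intervening 4 cancels the 2
            elif n == 3 and armed:
                gate = 1              # open: 3 followed a 2
                armed = False
        elif n == 5:
            gate = 0                  # close the gate on a 5
        yield gate

# Stand-in driver: in the piece the z12 sequence drives the gate;
# uniform random numbers are used here only for illustration.
samples = list(logic_gate(random.randrange(12) for _ in range(44100)))
```

Accelerated to audio rate, each output becomes one sample of a one-bit waveform, which is how the complex waveforms described above arise.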

If one seeds z12 with initial values (in other words,

tell it that it has already had a certain amount of
each number), the output changes dramatically,
even when the probabilities remain unchanged.
This makes the z12 algorithm incredibly powerful
for generating materials. Unfortunately, at this
point it still has to be done by ear, because we
cannot predict the results of different seeds when
irrational probabilities are used. At the same time,
if all the inputs and probabilities are the same, z12
will output the same string of numbers. Puckette
provides more detail on the mathematics behind
z12 in his paper on maximally uniform sequences
(Puckette 2015).

Cubic Zirconia uses 19 z12s. Each z12 uses one
of ten different seeds and one of four different logic
patches, but there is no duplicate combination of
seed or logic. This creates nineteen independent
texture streams with significantly different timbres.
Four z12s created unaltered texture streams; the
other 15 were passed through four high-order
band-pass filter banks with center frequencies
characterized as low (minimum 20 Hz), mid-low
(minimum 160 Hz), mid-high (minimum 640 Hz),
and high (minimum 2560 Hz). Because the z12
creates highly complex waveforms, the output of

42

Computer Music Journal

the filters can be quite different for a single z12. A
1/f random walk controls the center frequencies
of the filters. At the start and end of the piece, the
steps of the walk are very small, fixing the filters
at a nearly constant set of frequencies starting at
the minimum values. At the peak of the piece,
the steps become quite large. Because it is a 1/f
random walk, extreme frequency jumps are unusual,
but because the smallest step is large, the texture
streams spread out in the spectrum, opening spectral
space in the texture. An exponentially distributed
random variable controls the duration of the filter
step, starting and ending with longer durations, but
moving quickly at the peak. The result is that the
timbre at the beginning and end of the work is static,
while it moves, jumps, and changes frenetically at
the apex. The process gives the impression that the
listener is magnifying the texture, seeing greater
detail over the course of the work.
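One way to approximate the 1/f random walk described above is to step each center frequency with pink noise generated by the Voss–McCartney scheme. This is a sketch under stated assumptions: the source counts, step sizes, and names are mine, standing in for the piece-level control that is small at the start and end and large at the peak.

```python
import random

def pink_noise():
    """Crude 1/f ('pink') noise via the Voss-McCartney scheme: several
    uniform sources refreshed at halving rates and averaged, so extreme
    values (large frequency jumps) stay unusual."""
    n_sources = 4
    sources = [random.uniform(-1.0, 1.0) for _ in range(n_sources)]
    t = 0
    while True:
        for k in range(n_sources):
            if t % (2 ** k) == 0:     # source k refreshes every 2**k ticks
                sources[k] = random.uniform(-1.0, 1.0)
        t += 1
        yield sum(sources) / n_sources

def walk_center_freq(min_hz, step_hz, n_steps):
    """Random-walk a filter center frequency, clipped at its minimum.
    step_hz models the piece-level control: small at the start and end
    of the work, large at the peak."""
    freq = min_hz
    noise = pink_noise()
    path = []
    for _ in range(n_steps):
        freq = max(min_hz, freq + step_hz * next(noise))
        path.append(freq)
    return path
```

For example, `walk_center_freq(160.0, 5.0, 1000)` would give a nearly static mid-low filter, while a large `step_hz` opens the spectral space as at the apex of the piece.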

The four filter outputs of each z12 are sent
through independent delays starting at 0 sec (no
delay), 10 sec, 25 sec, and 37 sec. Each of these, then,
is sent through oscillating delays that adjust the
delays up to 2.5 sec more. The different outputs of
the filters, the long base delays, and the fluctuating
smaller delays ensure that each texture stream
appears to be an independent voice. This reduces the
number of z12 processes needed for the final texture.
The overall length of the work is approximately
8.5 min. The four, unaltered texture streams start
and end the piece, playing throughout the work.
They act as the substrate on which the rest of
the texture resides. The piece accumulates texture
streams from the filter banks for the first 3 min.
The full texture is then developed through the filter
bank settings for approximately 5 min. In the
last 20 sec, the texture streams dissipate. I took care
to ensure the most consistent amount of spectral
density, making sure that low, mid-low, mid-high,
and high frequency texture streams were as present
as possible.
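The delay layering described above can be sketched as a fixed base delay per filter output plus a slow sinusoidal wobble. The LFO rate and per-stream phase offsets below are assumptions; the text specifies only the base delays and the 2.5-sec maximum excursion.

```python
import math

BASE_DELAYS = [0.0, 10.0, 25.0, 37.0]   # sec, one per filter output (per the text)

def stream_delay(filter_index, t, lfo_rate_hz=0.05):
    """Total delay (sec) for one texture stream at time t (sec): the fixed
    base delay plus an oscillating extra delay of 0 to 2.5 sec.
    lfo_rate_hz and the phase offset are illustrative assumptions."""
    phase = 2.0 * math.pi * lfo_rate_hz * t + filter_index  # offset per stream
    wobble = 1.25 * (1.0 + math.sin(phase))                 # 0 .. 2.5 sec
    return BASE_DELAYS[filter_index] + wobble
```

Because the four filter outputs drift against each other, one z12 yields several apparently independent voices, which is what reduces the number of z12 processes needed.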

Spatialization in the Cube

Given that the Cube has multiple elevated tiers and
that at the time it could only present fixed-medium
works, my usual spatialization method would have
required a few things that were not technically
possible. First, I would have to determine the
speaker angles necessary to make sure Catwalks
2 and 3 and the ceiling would get equivalent but
different materials compared with the 64 speakers
on the ground level. Second, all 19 z12s would have
to run at the same time. Finally, all 124 channels
would have to be recorded to a 124-channel AIFF,
something that occasionally failed. Given time
constraints, I could not determine which software
was failing in reading AIFFs with very high track
Zahlen.

I mapped the four unaltered z12 outputs to four
loudspeakers at “compass points” at ground level:
front center, left, right, and back center. These
anchored the rest of the texture. Then I paired
each of the remaining 60 ground speakers with
one speaker as diametrically opposed as possible
on the higher levels. For example, Speaker 1 at the
right-front corner of the ground level was paired
with Speaker 75, the left-back corner of Catwalk
2. I worked around the ground-level ring, pairing
each speaker with one above and across the room.
In this way, each texture stream had its own pair
of loudspeakers: 15 z12s with four texture streams
each, applied to one of 60 pairs. I then used a uniform
random variable determining where between the
two speakers a texture stream existed, with a
simple cosine curve for amplitude panning. As
in my original methods, I also applied a uniform
random variable to control the wet/dry balance of a
reverberation patch.
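The placement between paired speakers can be sketched with an equal-power cosine/sine law. Whether the piece uses exactly this curve or a plain cosine fade is not specified, so treat the gain law and all names here as assumptions.

```python
import math
import random

def cosine_pan(position):
    """Pan between a ground speaker and its diametrically opposed elevated
    partner. position: 0.0 = fully ground, 1.0 = fully elevated.
    Returns (ground_gain, elevated_gain) using an equal-power cosine law."""
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

# As in the text, a uniform random variable decides where between the two
# speakers a texture stream sits.
ground_gain, elevated_gain = cosine_pan(random.uniform(0.0, 1.0))
```

Because each pair spans the room diagonally and vertically, even this simple one-dimensional pan moves a stream both across and above the listener.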

I planned the texture so that low, mid-low,
mid-high, and high texture streams were as evenly
distributed in space as possible, just as I made them
evenly distributed in time. I recorded eight texture
streams at a time (two z12s with four outputs each)
to 16 channels in Pd. In Audacity, I arranged each
output according to the channel assignment in the
Cube. The full piece is provided as 124 mono AIFFs,
a requirement for presenting in the Cube at the
time.

Given the spectral consistency, the spatial
consistency, and the overall form of the work, the
piece achieved the aims of textural composition:
highly mobile space; consistent, slowly changing

metaobject; and overall surround without a front-
facing orientation. The high number of speakers
makes the work clear and open. Surprisingly, the
piece feels less dense but more extensive than in
smaller arrays. For auditioning purposes, the 124
channels were mapped to the nearest, congruent
speakers in SpADE, set to 28 speakers at the time,
and the piece was closer and less penetrable. There
is significantly less masking when texture streams
have their own channels. This suggests that tex-
tural composition requires further thought when
mapping from smaller HDLAs to larger HDLAs.
In the same way that the portable eight-channel
version of Cubic Zirconia uses considerably fewer
texture streams, other HDLAs may require a dif-
ferent number of texture streams to maintain the
experience of the piece. More investigation must
be done to determine the consequences of scaling
between different HDLAs for both the creation and
experience of textural composition.

Conclusion

Textural composition is concerned with the sound
metaobject, an idea derived from acousmatic aes-
thetics but transformed into real-time practice. It
requires a few things to be true. First, the sound must
be complex, consisting of many smaller sounds but
bound into a whole object. Second, the sound must
be large, surrounding the listener and extending
beyond the perceivable horizon. Finally, the sound
must not favor a perspective, and it must not have a
frontal orientation.

The spatialization of the metaobject, therefore,
must be fully encompassing. It must be quickly
mobile but must not gel into coherent trajectories
of its internal parts. In addition, there needs to be
an average consistency to the spatialization in all
directions. This leads to a practice that balances
between point-source spatialization techniques and
mimetic aesthetics. My spatialization algorithm
allows the many texture streams that make
up a single texture to be spatialized independently,
statistically equally, and in all directions. This method
is scalable to any number of speakers with minor
patching changes. This algorithm does not account

for elevated speakers, however. I devised a solution
to this problem by providing false speaker locations
that cause the sounds to move vertically as well as
horizontally.

In composing for the Cube, I faced additional
constraints. First, the piece had to be fixed. Second,
there were three tiers of elevated speakers, including
an array on the ceiling. I had to adjust my real-time
procedures to survive in a fixed form. This constraint,
however, allowed other advantages, namely processing
capacity. Likewise, spatialization could not be done
in real time, and my schemes for elevated speakers,
i.e., false speaker locations, failed. Therefore, I
had to more carefully conceive of a spatialization
that borrowed from point-source and mimetic
procedures.

My methods of working in textural composition

resulted in fairly equivalent impressions in 4- to
32-channel arrays. Nevertheless, the piece for 124
channels is drastically different when reduced to
smaller HDLAs. This suggests that, although the
spatialization may be scalable, material differences
are also necessary when moving between significantly
different speaker numbers. This is obvious in other
compositional practices, but not necessarily so in
textural composition.

Further work in the Cube or similar places, as
well as auditioning in SpADE (a smaller HDLA),
could lead to basic compositional practices that
would ensure a more consistent experience of
material, even if spatialization were enhanced by
greater speaker arrays. As more HDLA environments
become available, additional work will be possible
in different spaces.

References

Austin, L. 2000. “Sound Diffusion in Composition and
Performance: An Interview with Denis Smalley.”
Computer Music Journal 24(2):10–21.

Austin, L. 2001. “Sound Diffusion in Composition and
Performance Practice II: An Interview with Ambrose
Field.” Computer Music Journal 25(4):21–30.

Barrett, N. 2002. “Spatio-Musical Composition Strate-

gies.” Organised Sound 7(3):313–323.

Burns, C. 2006. “Compositional Strategies for Point-

Source Spatialization.” eContact! 8.3. Available online

at cec.concordia.ca/econtact/8 3/burns.html. Accessed
16 September 2016.

Cage, J. 1958. “The Future of Music: Credo.” Silence:

Lectures and Writings. London: Marion Boyars, pp. 3–6.

Cascone, K. 2000. “The Aesthetics of Failure: ‘Post-

Digital’ Tendencies in Contemporary Computer Mu-
sic.” Computer Music Journal 24(4):12–18.

Cowell, H. (1929) 2004. “The Joys of Noise.” The New
Republic, 31 July, p. 287. Reprinted in C. Cox and D. Warner,
eds. Audio Culture: Readings in Modern Music. New
York City: Continuum, pp. 22–24.

Hagan, K. 2008a. “Textural Composition: Implementation
of an Intermediary Aesthetic.” In Proceedings of the
International Computer Music Conference, pp. 509–
514.

Hagan, K. 2008b. “Textural Composition and its Space.”
In Proceedings of the Sound and Music Computing
Conference, pp. 100–106.

Hagan, K. 2012. “Aesthetic and Philosophical Implications
of Noise in Morphons and Bions (2011).” In Proceedings
of the Music, Mind, and Invention Workshop. Available
online at www.tcnj.edu/∼mmi/papers/Paper20.pdf.
Accessed November 2016.

Hagan, K. 2013. “The Noise of Morphons and Bions.”
In Proceedings of the International Computer Music
Conference, pp. 376–379.

Normandeau, R. 2009. “Timbre Spatialisation: The

Medium Is the Space.” Organised Sound 14(3):277–
285.

Otondo, F. 2008. “Contemporary Trends in the Use of
Space in Electroacoustic Music.” Organised Sound
13(1):77–81.

Puckette, M. 2015. “Maximally Uniform Sequences
from Stochastic Processes.” Paper presented at the
Conference of the Society for Electro-Acoustic Music in
the United States, 26–28 March, Blacksburg, Virginia.
Pulkki, V. 1997. “Virtual Sound Source Positioning Using
Vector Base Amplitude Panning.” Journal of the Audio
Engineering Society 45(6):456–466.

Rolfe, C. 1999. “A Practical Guide to Diffusion.”

eContact! 2.4. Available online at econtact.ca/2 4
/pracdiff.htm. Accessed 29 September 2016.

Sazdov, R., G. Paine, and K. Stevens. 2007. “Perceptual
Investigation into Envelopment, Spatial Clarity, Und
Engulfment in Reproduced Multi-Channel Audio.”
In Proceedings of the 31st Audio Engineering So-
ciety International Conference. Available online at
www.aes.org/e-lib/browse.cfm?elib=13961 (subscrip-
tion required). Accessed 9 November 2016.

Schafer, R. M. 1994. Our Sonic Environment and the
Soundscape: The Tuning of the World. Rochester,
Vermont: Destiny.

Harrison, J. 1998. “Sound, Space, Sculpture: Some
Thoughts on the ‘What’, ‘How’ and ‘Why’ of Sound
Diffusion.” Organised Sound 3(2):117–127.

Kahn, D. 1999. Noise Water Meat. Cambridge, Mas-
sachusetts: MIT Press.

Smalley, D. 1997. “Spectromorphology: Explaining Sound-
Shapes.” Organised Sound 2(2):107–126.

Smalley, D. 2007. “Space-Form and the Acousmatic
Image.” Organised Sound 12(1):35–58.

Leider, C. 2007. “Multichannel Audio in Electroacoustic
Music.” In Proceedings of the IEEE International
Conference on Multimedia and Expo, S. 1890–1893.
Lyon, E., et al. 2016. “Genesis of the Cube: The Design
and Deployment of an HDLA-Based Performance and
Research Facility.” Computer Music Journal 40(4):62–
78.

Trochimczyk, M. 2001. “From Circles to Nets: On the
Signification of Spatial Sound Imagery in New Music.”
Computer Music Journal 25(4):39–56.

Truax, B. 1998. “Composition and Diffusion: Space in
Sound in Space.” Organised Sound 3(2):141–146.

Xenakis, I. 1992. Formalized Music: Thought and Mathe-
matics in Composition. Revised ed. Stuyvesant, New
York: Pendragon Press.
