Vesa Norilo

Vesa Norilo
Centre for Music and Technology
University of Arts Helsinki, Sibelius
Academy
PO Box 30
FI-00097 Uniarts, Finland
vnorilo@siba.fi

Kronos: A Declarative Metaprogramming Language for Digital Signal Processing

Abstract: Kronos is a signal-processing programming language based on the principles of semifunctional reactive
systems. It is aimed at efficient signal processing at the elementary level, and built to scale towards higher-level tasks
by utilizing the powerful programming paradigms of “metaprogramming” and reactive multirate systems. The Kronos
language features expressive source code as well as a streamlined, efficient runtime. The programming model presented
is adaptable for both sample-stream and event processing, offering a cleanly functional programming paradigm for a
wide range of musical signal-processing problems, exemplified herein by a selection and discussion of code examples.

Computer Music Journal, 39:4, pp. 30–48, Winter 2015.
doi:10.1162/COMJ_a_00330
© 2015 Massachusetts Institute of Technology.

Signal processing is fundamental to most areas
of creative music technology. It is deployed on
both commodity computers and specialized sound-
processing hardware to accomplish transformation
and synthesis of musical signals. Programming these
processors has proven resistant to the advances in
general computer science. Most signal processors
are programmed in low-level languages, such as C,
often thinly wrapped in rudimentary C++. Such a
workflow involves a great deal of tedious detail, as
these languages do not feature language constructs
that would enable a sufficiently efficient imple-
mentation of abstractions that would adequately
generalize signal processing. Although a variety
of specialized musical programming environments
have been developed, most of these do not enable
the programming of actual signal processors, forcing
the user to rely on built-in black boxes that are
typically monolithic, inflexible, and insufficiently
general.

In this article, I argue that much of this stems
from the computational demands of real-time signal
processing. Although much of signal processing is
very simple in terms of program or data structures,
it is hard to take advantage of this simplicity in a
general-purpose compiler to sufficiently optimize
constructs that would enable a higher-level signal-
processing idiom. As a solution, I propose a highly
streamlined method for signal processing, starting
from a minimal dataflow language that can describe
the vast majority of signal-processing tasks with a
handful of simple concepts. This language is a good
fit for hardware—ranging from CPUs to GPUs and
even custom-made DSP chips—but unpleasant for
humans to work in. Human programmers are instead
presented with a very high-level metalanguage,
which is compiled into the lower-level data flow.
This programming method is called Kronos.

Musical Programming Paradigms
and Environments

The most prominent programming paradigm for
musical signal processing is the unit generator
idiom (Roads 1996, pp. 787–810). Some examples
of “ugen” languages include the Music N family (up
to the contemporary Csound; see Boulanger 2000), as
well as Pure Data (Puckette 1996) and SuperCollider
(McCartney 2002).

The Unit Generator Paradigm

The success of the unit generator paradigm is driven
by the declarative nature of the ugen graph; the
programmer describes data flows between primitive,
easily understood unit generators.

It is noteworthy that the typical selection of
ugens in these languages is very different from the
primitives and libraries available in general-purpose
languages. Whereas languages like C ultimately
consist of the data types supported by a CPU and the
primitive operations on them, a typical ugen could
be anything from a simple mathematical operation
to a reverberator or a pitch tracker. Ugen languages
tend to offer a large library of highly specialized
ugens rather than focusing on a small orthogonal
set of primitives that could be used to express any
desired program. The libraries supplied with most
general-purpose languages tend to be written in
those languages; this is not the case with the typical
ugen language.

The Constraints of the Ugen Interpreter

I classify the majority of musical programming
environments as ugen interpreters. Such environ-
ments are written in a general-purpose programming
language, along with the library of ugens that are
available for the user. These are implemented in
a way that allows late binding composition: the
ugens are designed to be able to connect to the
inputs and outputs of other, arbitrary ugens. This
is similar to how traditional program interpreters
work—threading user programs from predefined
native code blobs—hence the term ugen interpreter.
In this model, ugens must be implemented via
parametric polymorphism: as a set of objects that
share a suitable amount of structure, to be able to
interconnect and interact with the environment,
regardless of the exact type of the ugen in ques-
tion. Dynamic dispatch is required, as the correct
signal-processing routine must be reached through
an indirect branch. This is problematic for contem-
porary hardware, as the hardware branch prediction
relies on the hardware instruction pointer: in an
interpreter, the hardware instruction pointer and
the interpreter program location are unrelated.

A relevant study (Ertl and Gregg 2007) cites
misprediction rates of 50 percent to 98 percent for
typical interpreters. The exact cost of misprediction
depends on the computing hardware and is most
often not fully disclosed. On a Sandy Bridge CPU by
Intel, the cost is typically 18 clock cycles; as a point
of reference, the chip can compute 144 floating-point
operations in 18 cycles at peak throughput. There
are state-of-the-art methods (Ertl and Gregg 2007;
Kim et al. 2009) to improve interpreter performance.
Most musical programming environments choose
instead a simple, yet reasonably effective, means
of mitigating the cost of dynamic dispatch. Audio

processing is vectorized to amortize the cost of
dynamic dispatch over buffers of data instead of
individual samples.
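
To make this concrete, the following C++ sketch illustrates the kind of
buffer-at-a-time dispatch described here. It is only an illustration; the
class and function names are invented for this sketch and are not taken from
any of the surveyed environments.

#include <vector>

// Minimal sketch of a ugen interpreter: each ugen is reached through a
// virtual call, and the host amortizes that indirect branch over a buffer.
struct UGen {
    virtual ~UGen() = default;
    virtual void process(const float* in, float* out, int frames) = 0;
};

struct Gain : UGen {
    float gain = 0.5f;
    void process(const float* in, float* out, int frames) override {
        for (int i = 0; i < frames; ++i)
            out[i] = gain * in[i];      // the "useful work" of this ugen
    }
};

// One hard-to-predict indirect branch per ugen per buffer (e.g., 128 frames),
// rather than one per sample: this is the amortization discussed above.
void runGraph(const std::vector<UGen*>& graph,
              const float* in, float* out, int frames) {
    for (UGen* u : graph) {
        u->process(in, out, frames);
        in = out;                       // naive serial chaining, for illustration
    }
}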

Consider a buffer size of 128 samples, which is
often considered low enough to not interfere in
a real-time musical performance. For a ugen that
semantically maps to a single hardware instruction,
misprediction could consume 36 percent to 53
percent of the time on a Sandy Bridge CPU, as
derived from the numbers previously stated. To
reduce the impact of this cost, the ugen should
spend more time doing useful work.
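
One plausible way to arrive at these figures, assuming one potentially
mispredicted dispatch per 128-sample buffer and the peak rate cited earlier
(144 floating-point operations per 18 cycles, i.e., 8 per cycle), is the
following; the arithmetic is a reconstruction rather than a quotation of the
original derivation.

% Useful work of a single-instruction ugen over one buffer:
% 128 operations / (8 operations per cycle) = 16 cycles.
\[
\frac{18}{18 + 16} \approx 53\,\%\ \text{(every dispatch mispredicted)},
\qquad
\frac{0.5 \times 18}{0.5 \times 18 + 16} = 36\,\%\ \text{(50 percent misprediction rate)}.
\]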

Improving the efficiency of the interpreter could
involve either increasing the buffer size further or
increasing the amount of useful work a ugen does per
dispatch. As larger buffer sizes introduce latency,
ugen design is driven to favor large, monolithic
blocks, very unlike the general-purpose primitives
most programming languages use as the starting
point, or the native instructions of the hardware.
Furthermore, any buffering introduces a delay
equivalent to the buffer size to all feedback connec-
tions in the system, which precludes applications
such as elementary filter design or many types of
physical modeling.

An Ousterhout Corollary

John Ousterhout’s dichotomy claims that program-
ming languages are either systems programming
languages or scripting languages (Ousterhout 1998).
To summarize, the former are statically typed, and
produce efficient programs that operate in isolation.
C is the prototypical representative of this group.
The latter are dynamically typed, less concerned
with efficiency, intended to glue together distinct
components of a software system. These are repre-
sented by languages such as bash, or Ousterhout’s
own Tcl.

The Ousterhout dichotomy is far from universally
accepted, although an interesting corollary to
musical programming can be found. Unit generators
are the static, isolated, and efficient components
in most musical programming languages. They
are typically built with languages aligned with
Ousterhout’s systems programming group. The
scripting group is mirrored by the programming
surfaces such as the patching interface described by
Miller Puckette (1988) or the control script languages
in ChucK (Wang and Cook 2003) or SuperCollider.
Often, these control languages are not themselves
capable of implementing actual signal-processing
routines with satisfactory performance, as they
focus on just acting as the glue layer between black
boxes that do the actual signal processing.

A good analysis of the tradeoffs and division
of labor between “systems” languages and “glue”
languages in the domain of musical programming
has been previously given by Eli Brandt (2002, pp.
3–4). Although the “glue” of musical programming
has constantly improved over the last decades, el
“systems” part has remained largely stagnant.

Beyond Ugens

To identify avenues for improvement, let us first ex-
amine a selection of musical programming languages
that deviate from the standard ugen interpreter
model.

Common Lisp Music: Transcompilation

Common Lisp Music (CLM, see Schottstaedt 1994)
is an implementation of the Music N paradigm
in Common Lisp. Interestingly, CLM attempts to
facilitate the writing of signal-processing routines
in Lisp, a high-level language. This is accomplished
by means of transcompilation: CLM can generate
a C-language version of a user-defined instrument,
including compiler-generated type annotations,
enabling robust optimization and code generation.
Only a narrow subset of Lisp is transcompiled
by CLM, however. This subset is not, in fact, sig-
nificantly different from low-level C—a lower level
of abstraction than in standard C++. Indeed, CLM
code examples resemble idiomatic C routines, albeit
written in S-expressions. Although the metapro-
gramming power of Lisp could well be utilized to
generate a transcompilable program from a higher-
level idiom, this has not been attempted in the
context of CLM.

Nyquist: Signals as Values

Nyquist (Dannenberg 1997) is a Lisp-based synthesis
environment that extends the XLisp interpreter
with data types and operators specific to signal
processing. The main novelty here is to treat signals
as value types, which enable user programs to
inspect, modify, and pass around audio signals
in their entirety without significant performance
penalties. Composition of signals rather than ugens
allows for a wider range of constructs, especially
regarding composition in time.

As for DSP, Nyquist remains close to the ugen
interpreter model. Signals are lazily evaluated, and
buffered to improve efficiency; the concerns here
are identical to those discussed in the section “The
Constraints of the Ugen Interpreter.” The core
processing routines are in fact written in the C
language. Nyquist presents an alternative, arguably
more apposite, model of interacting with the signal
flow, but the programmer is still constrained to
merely composing relatively monolithic routines
written in C.

As a significant implementation detail, Nyquist
utilizes automated code generation for its operators.
According to Roger Dannenberg (1997), this is
to avoid errors in the formulaic but complicated
infrastructure related to matching the static C
code to the context of the current user program—
handling mismatched channel counts, sample rates,
and timings. This could be seen as a nascent form of
metaprogramming: generation of low-level signal-
processing primitives from a higher-level description
to bypass the tedious, error-prone boilerplate code
that comes with imperative signal processing.
Nyquist does not, however, seem to make any
attempt to generalize this capability or, indeed, to
offer it to end users.

SuperCollider: Programmatic Ugen Graphs
and Parameterization

SuperCollider proposes a two-layer design, offering
the user a ugen interpreter system running as a
server process and a control scripting environment
designed for musical programming and building the
ugen graphs. The graphs themselves are interpreted.
SuperCollider offers the option of processing the
graph per sample, but performs poorly in this
configuration.

An interesting idea in SuperCollider is a form of
ugen parameterization: channel expansion. Vectors
of parameters can be applied to ugens, which then
become vectorized. This programming technique is
effectively functional polymorphism: The behavior
of the ugen is governed by the type of data fed into
it.

The key benefit is that variants of a user-defined

signal path can easily be constructed ad hoc. The
programmer does not need to go through the entire
ugen pipeline and adjust it in multiple places to
accommodate a new channel count. The system can
infer some pipeline properties from the type of input
signal; the pipeline is parameterized by the input
type—namely, channel count. This aids in reusing
an existing design in new contexts.

PWGLSynth and ChucK: Fine-Grained
Interpretation

PWGLSynth (Laurson and Norilo 2006) and ChucK
(Wang and Cook 2003) are implemented as ugen
interpreters, but they operate on a per-sample basis.
In PWGLSynth, this design choice results from the
desire to support a variety of physics-based models,
in which unit-delay recursion and precise signal
timings are required. ChucK also requires a high
time resolution, as it is based on the premise of
interleaving the processing of a high-level control
script and a conventional ugen graph with accurate
timing. Such a design takes a severe performance hit
from the fine-grained interpretation, but does not
prevent these systems from supporting a wide range
of synthesis and analysis tasks in real time.

Both environments feature a synchronous au-
dio graph with pull semantics, offering special
constructs for asynchronous push semantics, con-
sidered useful for audio analysis (Norilo and Laurson
2008b; Wang, Fiebrink, and Cook 2007). PWGLSynth
is best known for its close integration to the PWGL
system, including the latter’s music-notation facil-
ities. ChucK’s main contribution is to enhance the
ugen graph paradigm with an imperative control

script, with the capability to accurately time its
intervenciones.

Extempore/XTLang: Dynamic Code Generation

Andrew Sorensen’s Extempore has recently gained
signal-processing capabilities in the form of XTLang
(Sorensen and Gardner 2010). XTLang is a systems-
programming extension to Lisp, offering a thin
wrapper over the machine model presented by LLVM
(Lattner and Adve 2004) along with a framework
for region-based memory management. XTLang is
designed as a low-level, high-performance language,
and in many cases it requires manual data-type
annotations and memory management. The design
of Extempore/XTLang is notable in pursuing a high
degree of integration between a slower, dynamic,
high-level idiom, and an efficient low-level machine
representation. In effect, the higher-level language
can drive the XTLang compiler, generating and
compiling code on demand.

Faust: Rethinking the Fundamentals

An important example of a language designed
for and capable of implementing ugens is Faust
(Orlarey, Fober, and Letz 2009). Faust utilizes func-
tional dataflow programming to enable relatively
high-level description and composition of signal-
processing fundamentals.

The core principle behind Faust is the com-
position of signal-processing functions: “block
diagrams,” in the Faust vernacular. Primitives can
be combined in several elementary routings, in-
cluding parallel, serial, and cyclic signal paths. This
programming model discards the imperative style
in favor of a declarative description of signal paths,
allowing eloquent and compact representation of
many signal-processing algorithms.

Most importantly, Faust can compose functions
on the very lowest level, with sample granularity
and no dispatch overhead. This is possible because
Faust performs whole-program compilation, using C
as the intermediate representation of a static signal-
flow graph. A custom compiler is a significant
technical achievement, allowing Faust to overcome
the limits of interpreters as discussed previously.

Table 1. Some Musical Programming Environments

Environment      Scheme                  Per Sample   Features

CLM              Transcompiler           X            Low-level DSP in Lisp
Nyquist          Interpreter                          Signals as values
SuperCollider    Interpreter             See note     Ugen graph generation
PWGLSynth        Interpreter             X            Score integration
ChucK            Interpreter             X            Strong timing
Extempore        Interpreter/Compiler    X            Dynamic code generation
Faust            Compiler                X            High-level DSP

Note: SuperCollider can use very short buffers; in practice, however, this can become
prohibitive in terms of performance.

Summary of Surveyed Environments

The environments surveyed are summarized in
Table 1. Common Lisp Music, PWGLSynth, ChucK,
Extempore, and Faust are capable of operating per
sample, making them viable candidates for the fun-
damentals of signal processing. For SuperCollider,
this capability exists in theory, but is not useful in
practice owing to low performance.

Extempore and CLM in effect wrap a C-like stack-
machine representation in an S-expression syntax.
The programming models do not differ significantly
from programming in pure C, and in many cases are
lower level than standard C++. Although Extempore
is interesting in the sense that signal processors can
be conveniently and quickly created from Lisp, it
does not seem to be designed to tackle the core issue
of signal processing in a new way.

PWGLSynth and ChucK offer a ugen graph rep-

resentation capable of operating on the sample
level. As interpreters, these systems fall far be-
low theoretical machine limits in computational
performance. The desire to achieve adequate per-
formance has likely governed the core design of
these environments—both feature large, monolithic
ugens that can only be composed in very basic
ways.

Faust is the project that is most closely aligned

with the goals of the present study. Discarding
the ugen model entirely, it is a signal-processing
language designed from the ground up for the task
of high-level representation of common DSPs with
a very high performance.

Towards Higher-Level Signal Processing

As is evident from the survey in the section “Beyond
Ugens,” solutions and formulations that address
musical programming on a higher level—those that
constitute the Ousterhoutian “glue”—are plentiful.
Their lower-level counterparts are far fewer. Only
Faust is competitive with C/C++, if one desires to
design filters, oscillators, or physical models from
scratch. The objective of the present study is to
explore and develop this particular domain.

A Look at Faust

Programming in Faust is about the composition of
block diagrams. At the leaves of its syntax tree are
functions, such as sin, cos, or 5 (interpreted as a
constant-valued function). These can be composed
using one of the five operators merge, split, sequen-
tial, parallel, or recursive composition. In terms
of a signal graph, the leaves of the Faust syntax
tree are the nodes, and the composition operators
describe the edges. The Faust syntax tree is therefore
topologically quite far removed from the actual
signal-flow graph. The programs are compact, to the
point of being terse. An example of a biquad filter
implemented in Faust is shown in Figure 1; this is
an excerpt from the Faust tutorial.

Faust is a pure functional language (Strachey
2000). Programs have no state, yet Faust is capable
of implementing algorithms that are typically
stateful, such as digital filters and delay effects.

Figure 1. Biquad filter in
Faust.

biquad(a1,a2,b0,b1,b2) = + ~ conv2(a1,a2) : conv3(b0,b1,b2)
with {
        conv3(k0,k1,k2,x) = k0*x + k1*x' + k2*x'' ;
        conv2(k0,k1,x) = k0*x + k1*x' ;
};

This is accomplished by lifting signal memory to
a language construct. Faust offers delay operators,
in addition to the recursive composition operator
that introduces an implicit unit delay. By utilizing
these operators, Faust functions are pure functions
of current and past inputs.

Abstraction at a more sophisticated level has
been available since Faust was enhanced with a
term-rewriting extension by Albert Gräf (2010).
Faust functions can now change their behavior
based on pattern matching against the argument.
As the arguments are block diagrams, this is a
form of functional polymorphism with regard to the
topology of signal graphs.

In summary, Faust defines a block-diagram alge-
bra, which is used to compose an audio-processing
function of arbitrary complexity. This function
describes a static signal flow graph, which can be
compiled into efficient C++ code.

Introducing Kronos

This section provides an overview of the Kronos
programming language, which is the focus of the
current study.

Designing for Code Optimization

Ideally, specification of a programming language
should be separated from its implementation,
delegating all concerns of time and space efficiency
to the compiler. In practice, this is not always
the case. In the section “The Constraints of the
Ugen Interpreter,” I proposed that concerns with
implementation efficiency encourage ugen design
that is detrimental to the language. A more widely
acknowledged example is the case of tail-call
optimization that many functional languages, such
as Scheme (Abelson et al. 1991), require. The
idiomatic programming style in Scheme relies on
the fact that the compiler can produce tail-recursive
functions that operate in constant space.

Because signal processing is a very narrow,
focused programming task, design for optimization
can be more radical.

The first assumption I propose is that multirate

techniques are essential for optimizing the effi-
ciency of signal processors. Most systems feature
a distinction between audio rate and control rate.
I propose that update rate should be considered to
be a task for the optimizing compiler. It should be
possible to use similar signal semantics for all the
signals in the system, from audio to control to MIDI
and the user interface, to enable a universal signal
model (Norilo and Laurson 2008a).

The second assumption is that for signal process-
ing we often desire a higher level of expressivity and
abstraction when describing the signal-processor
topology than during processing itself. This assump-
tion is supported by the fact that environments like
Faust, ChucK, and SuperCollider, among others,
divide the task of describing signal processors into
graph generation and actual processing. I propose
that this division be considered at the earliest stages
of language design, appropriately formalized, and
subsequently exploited in optimization.

The Dataflow Language

The starting point of the proposed design is a
dataflow language that is minimal in the sense of be-
ing very amenable to compiler optimization, but still
complete enough to represent the majority of typical
signal-processing tasks. For the building blocks,
we choose arithmetic and logic on elementary
data types, function application, and algebraic type
composition. The language will be represented
by a static signal graph, implying determinism
that is useful for analysis and optimization of the
equivalent machine code.

The nodes of the graph represent operators, and

the edges represent signal transfer. The graph is
functionally pure, which means that functions
cannot induce observable side effects. As in Faust,
delay and signal memory is included in the dataflow
language as a first-class operator. The compiler rei-
fies the delays as stateful operations. This restricted
use of state allows the language to be referentially
transparent (Strachey 2000) while providing efficient
delay operations: the user faces pure functions of
current and past inputs, while the machine executes
a streamlined ring-buffer operation.

The language semantics are completed by allow-
ing cycles in the signal flow, as long as each cycle
includes at least one sample of delay. This greatly
enhances the capability of the dataflow language to
express signal processors, as feedback-delay routing
is extremely commonplace. If the Kronos language is
represented textually, cyclic expressions result from
symbols defined recursively in terms of each other;
in visual form, the cycles are directly observable in
the program patch.

The deterministic execution semantics and refer-
ential transparency allow the compiler to perform
global dataflow analysis on entire programs. The
main use of this facility is automated factorization
of signal rates: The compiler can determine the
required update rates of each pure function in the
system by observing the update rates of its inputs.
Signal sources can be inserted into the dataflow
graph as external inputs. In compilation, these
become the entry points that drive the graph com-
putation. One such entry point is the audio clock;
another could represent a slider in the user interface.
This factorization is one of the main contributions
of the Kronos project.

The dataflow principle behind what is described

here is classified by Peter Van Roy (2009) as a
discrete reactive system. It will respond to a well-
defined series of discrete input events with another
well-defined series of discrete output events, which
is true for any Kronos program or fragment of
one.

The Metalanguage

As the reader may observe, the dataflow language
as described here greatly resembles the result of a
function composition written in Faust. Such a static
signal graph is no doubt suitable for optimizing
compilers; sin embargo, it is not very practical for
a human programmer to write directly. As an
abstraction over the static signal graph, Faust offers
a block-diagram algebra and term rewriting (Gräf
2010).

For Kronos, I propose an alternative that I argue
is both simpler and more comprehensive. Instead
of the dataflow language, the programmer works
in a metalanguage. The main abstraction offered
by the metalanguage is polymorphic function
application, implemented as System Fω (Barendregt
1991): The behavior and result type of a function
are a function of argument type; notably, argument
value is not permitted to influence the result type.
The application of polymorphic functions is guided
by type constraints—the algebraic structure and
semantic notation of signals can be used to drive
function selection. This notion is very abstract; to
better explain it, in the section “Case Studies” I
show how it can encode a block-diagram algebra,
algorithmic routing, and techniques of generic
programming.

In essence, the metalanguage operates on types
and the dataflow language operates on values. The
metalanguage is used to construct a statically typed
dataflow graph. The restrictions of System Fω ensure
that the complexity of functional polymorphism can
be completely eliminated when moving from the
metalanguage to the dataflow language; any such
complexity is in the type-system computations at
compile time—it is essentially free during the crit-
ical real-time processing of data flow. This enables
the programmer to fully exploit very complicated
polymorphic abstractions in signal processing, with
a performance similar to low-level C, albeit with
considerable restrictions.

Because values do not influence types, depen-
dent types cannot be expressed. As type-based
polymorphism is the main control-flow mecha-
nism, runtime values are, in effect, shut out from
influencing program flow.

Table 2. Kronos Language Features

Paradigm             Functional
Evaluation           Strict
Typing discipline    Static, strong, derived
Compilation          Static, just in time
Usage                Library, repl, command line
Backend              LLVM (Lattner and Adve 2004)
Platforms            Windows, Mac OS X, Linux
License              GPL3
Repository           https://bitbucket.org/vnorilo/k3

This restriction may seem crippling to a pro-
grammer experienced in general-purpose languages,
although a static signal graph is a feature in many
successful signal-processing systems, including Pure
Data and Faust.

I argue that the System Fω is an apposite formal-
ization for the division of signal processing into
graph generation and processing. It cleanly separates
the high-level metaprogramming layer and low-level
signal-processing layer into two distinct realms: that
of types, and that of values.

A summary of the characteristics of the Kronos

language is shown in Table 2. For a detailed dis-
cussion of the theory, the reader is referred to prior
publications (Norilo 2011b, 2013).

Case Studies

This section aims to demonstrate the programming
model described in the earlier section “Towards
Higher-Level Signal Processing,” via case studies
selected for each particular aspect of the design.
The examples are not designed to be revolutionary;
rather, they are selected as a range of representative
classic problems in signal processing. I wish to stress
that the examples are designed to be self-contained.
Although any sustainable programming practice
relies on reusable components, the examples here
strive to demonstrate how proper signal-processing
modules can be devised relatively easily from ex-
tremely low-level primitives, without an extensive
support library, as long as the language provides
adequate facilities for abstraction.

Polymorphism, State, and Cyclic Graphs

A simple one-pole filter implemented in Kronos
is shown in Figure 2. This example demonstrates
elementary type computations, delay reification,
and cyclic signal paths. The notable details occur in
the unit-delay operator z-1. This operator receives
two parameters, a forward initialization path and
the actual signal path.

Notably, the signal path in this example is cyclic.

This is evident in how the definitions of y0 and
y1 are mutually recursive. Please note that y1 is
not a variable or a memory location: It is a symbol
bound to a specific node in the signal graph. Kronos
permits cyclic graphs, as long as they feature a delay
operator. The mapping of this cycle to efficient
machine code is the responsibility of the dataflow
compiler, which produces a set of assignment side
effects that fulfill the desired semantics.

The forward initialization path is used to describe
the implicit history of the delay operator before any
input has been received. Instead of being expressed
directly as a numerical constant, the value is derived
from the pole parameter. This causes the data
type of the delay path to match that of the pole
parameter. If the user chooses to utilize double
precision for the pole parameter, the internal data
paths of the filter are automatically instantiated in
double precision. The input might still be in single
precision; by default, the runtime library would
inject a type upgrade into the difference equation.
The upgrade semantics are based on functional
polymorphism, and they are defined in source form
instead of being built into the compiler.

User-defined types can also be used for the
pole parameter, provided that suitable multipli-
cation and subtraction operators and implicit type
conversions exist. The runtime library provides a
complex number implementation (again, in source
form) that provides basic arithmetic and speci-
fies an implicit type coercion from real values: if
used for the pole, the filter becomes a complex
resonator with complex-valued output. This type-
based polymorphism can be seen as a generalization
of ugen parameterization—for example, the way
SuperCollider ugens can adapt to incoming channel
count.

Figure 2. One-pole filter.

Figure 3. Biquad filter,
based on the realization
known as Direct Form II
(Smith 2007).

Filter(x0 pole) {
        y1 = z-1( (pole - pole) y0 )
        ; y1 is initially zero, subsequently delayed y0.
        ; The initial value of zero is expressed as 'pole - pole'
        ; to ensure that the feedback path type matches the pole type.

        y0 = x0 - pole * y1
        ; Compute y0, the output.

        Filter = y0
        ; This is the filter output.
}

; Straightforward single-precision one-pole filter:
example1 = Filter(sig 0.5)

; Upgrade the signal path to double precision:
example2 = Filter(sig 0.5d)

; Use as a resonator via a complex pole and reduction to real part.
Resonator(sig w radius) {
        ; 'Real' and 'Polar' are functions in namespace 'Complex'
        Resonator = Complex:Real(Filter(sig Complex:Polar(w radius)))
}

Figure 2

For a more straightforward implementation, the
complicated type computations can be ignored.
Declaring the unit delay as y1 = z-1(0 y0) would
yield a filter that was fixed to single-precision
floating point; different types for the input signal
or the pole would result in a type error at the z-1
operator. This approach is likely more suitable for
beginning programmers, although they should have
little difficulty in using (as opposed to coding) the
generic version.

An implementation of a biquad filter is shown in
Figure 3. This filter is identical to the Faust version
in Figure 1. Because Kronos operates on signal values
instead of block diagrams, the syntax tree of this
implementation is identical to the signal flow graph.
This is arguably easier to understand than the Faust
versión.

Higher-Order Functions in Signal Processing

Polymorphism is a means of describing something
more general than a particular filter implementation.

Biquad(sig b0 b1 b2 a1 a2) {
        zero = sig - sig

        ; feedback section
        y0 = sig - y1 * a1 - y2 * a2
        y1 = z-1(zero y0)
        y2 = z-1(zero y1)

        ; feedforward section
        Biquad = y0 * b0 + y1 * b1 + y2 * b2
}

Figure 3

For example, the previous example described the
principle of unit-delay recursion through a feedback
coefficient with different abstract types.

An even more fundamental principle underlies
this filter model and several other audio processes:
That of recursive composition of unit delays. This
can be described in terms of a binary function of
the feedforward and feedback signals into an output
signal.

Figure 4. Generic
recursion.

; Routes a function output back to its first argument through a unit delay.
Recursive(sig binary-func) {
        state = binary-func(z-1(sig - sig state) sig)
        Recursive = state
}

Filter2(sig pole) {
        ; Lambda arrow '=>' constructs an anonymous function: the arguments
        ; are on the left hand side, and the body is on the right hand side.
        dif-eq = (y1 x0) => x0 - pole * y1
        ; one-pole filter is a recursive composition of a simple multiply-add expression.
        Filter2 = Recursive( sig dif-eq )
}

Buzzer(freq) {
        ; Local function to wrap the phasor.
        wrap = x => x - Floor(x)
        ; Compose a buzzer from a recursively composed increment and wrap.
        Buzzer = Recursive( freq (state freq) =>
                wrap(state + Frequency-Coefficient(freq Audio:Signal(0))) )
}

; example usage
; Filter2( Buzzer( 440 ) 0.5 )

Recursive Routing Metafunction

In Kronos, the presence of first-class functions—
or functions as signals—allows for higher-order
funciones. Such a function can be designed to wrap a
suitable binary function in a recursive composition
as previously described. The implementation of
this metafunction is given in Figure 4, along with
example usage to reconstruct the filter from Figure 2
as well as a simple phasor, used here as a naive
sawtooth oscillator. This demonstrates how to
implement a composition operator, such as those
built into Faust, by utilizing higher-order functions.
The recursive composition function is an example
of algorithmic routing. It is a function that generates
signal graphs according to a generally useful routing
principle. In addition, parallel and serial routings
are ubiquitous, and well suited for expression in the
functional style.

Schroeder Reverberator

Schroeder reverberation is a classic example of
a signal-processing problem combining parallel
and serial routing (Schroeder 1969). An example

implementation is given in Figure 5 along with
routing metafunctions, Map and Fold. Complete
implementations are shown for demonstration
purposes—the functions are included in source form
within the runtime library.

Further examples of advanced reverberators
written in Kronos can be found in an earlier paper
by the author (Norilo 2011a).

Sinusoid Waveshaper

Metaprogramming can be applied to implement
reconfigurable signal processors. Consider a poly-
nomial sinusoid waveshaper; different levels of
precision are required for different applications.
Figure 6 demonstrates a routine that can generate a
polynomial of any order in the type system.

In summary, the functional paradigm enables
abstraction and generalization of various signal-
processing principles such as the routing algorithms
described above. The application of first-class
functions allows flexible program composition at
compile time without a negative impact on runtime
performance.

Figure 5. Algorithmic
routing.

Fold(func data) {
        ; Extract two elements and the tail from the list.
        (x1 x2 xs) = data
        ; If tail is empty, result is 'func(x1 x2)',
        ; otherwise fold 'x1' and 'x2' into a new list head and recursively call Fold.
        Fold = Nil?(xs) : func(x1 x2)
                Fold(func func(x1 x2) xs)
}

; Parallel routing is a functional map.
Map(func data) {
        ; For an empty list, return an empty list.
        Map = When(Nil?(data) data)
        ; Otherwise split the list to head and tail,
        (x xs) = data
        ; apply mapping function to head, and recursively call the Map function.
        Map = (func(x) Map(func xs))
}

; Simple comb filter.
Comb(sig feedback delay) {
        out = rbuf(sig - sig delay sig + feedback * out)
        Comb = out
}

; Allpass comb filter.
Allpass-Comb(sig feedback delay) {
        vd = rbuf(sig - sig delay v)
        v = sig - feedback * vd
        Allpass-Comb = feedback * v + vd
}

Reverb(sig rt60) {
        ; List of comb filter delay times for 44.1 kHz.
        delays = [ #1687 #1601 #2053 #2251 ]
        ; Compute rt60 in samples.
        rt60smp = Rate-of( sig ) * rt60
        ; A comb filter with the feedback coefficient derived from delay time.
        rvcomb = delay => Comb(sig Math:Pow( 0.001 delay / rt60smp ) delay)
        ; Comb filter bank and sum from the list of delay times.
        combs-sum = Fold( (+) Map( rvcomb delays ) )
        ; Cascaded allpass filters as a fold.
        Reverb = Fold( Allpass-Comb [combs-sum (0.7 #347) (0.7 #113) (0.7 #41)] )
}

Multirate Processing: FFT

Fast Fourier transform (FFT)–based spectral analysis
is a good example of a multirate process. The signal

is transformed from an audio-rate sample stream
to a much slower and wider stream of spectrum
frames. Such buffered processes can be expressed
as signal-rate decimation on the contents of ring

Figure 6. Sinusoid
waveshaper.

Horner-Scheme(x coefficients) {
        Horner-Scheme = Fold((a b) => a + x * b coefficients)
}

Pi = #3.14159265359

Cosine-Coefs(order) {
        ; Generate next exp(x) coefficient from the previous one.
        exp-iter = (index num denom) => (
                index + #1          ; next coefficient index
                num * #2 * Pi       ; next numerator
                denom * index)      ; next denominator

        flip-sign = (index num denom) => (index Neg(num) denom)
        ; Generate next cos(pi w) coefficient from the previous one.
        sine-iter = x => flip-sign(exp-iter(exp-iter(x)))
        ; Generate 'order' coefficients.
        Cosine-Coefs = Algorithm:Map(
                (index num denom) => (num / denom)
                Algorithm:Expand(order sine-iter (#2 #-2 * Pi #1)))
}

Cosine-Shape(x order) {
        x1 = x - #0.25
        Cosine-Shape = x1 * Horner-Scheme(x1 * x1 Cosine-Coefs(order))
}

buffers, with subsequent transformations. Figure 7
demonstrates a spectral analyzer written in Kronos.
For simplicity, algorithmic optimization for real-
valued signals has been omitted. The FFT, despite
the high-level expression, performs similarly to a
simple nonrecursive C implementation. It cannot
compete with state-of-the-art FFTs, however.

Because the result of the analyzer is a signal con-
sisting of FFT frames at a fraction of the audio rate,
the construction of algorithms such as overlap-add
convolution or FFT filtering is easy to accomplish.

Polyphonic Synthesizer

The final example is a simple polyphonic FM
synthesizer equipped with a voice allocator, shown
in Figure 8. This is intended as a demonstration of
how the signal model and programming paradigm
can scale from efficient low-level implementations
upwards to higher-level tasks.

The voice allocator is modeled as a ugen receiving

a stream of MIDI data and producing a vector of

voices, in which each voice is represented by a
MIDI note number, a gate signal, and a “voice age”
counter. The allocator is a unit-delay recursion
around the vector of voices, utilizing combinatory
logic to lower the gate signals for any released
keys and insert newly pressed keys in place of the
least important of the current voices. The allocator
is driven by the MIDI signal, so each incoming
MIDI event causes the voice vector to update. This
functionality depends on the compiler to deduce
data flows and provide unit-delay recursion on the
MIDI stream.

To demonstrate the multirate capabilities of
Kronos, the example features a low-frequency
oscillator (LFO) shared by all the voices. This LFO
is just another FM operator, but its update rate
is downsampled by a factor of krate. The LFO
modulates the frequencies logarithmically. This
is contrived, but should demonstrate the effect
of compiler optimization of update rates, since
an expensive power function is required for each
frequency computation.

Figure 7. Spectrum
analyzer.

Stride-2(Xs) {
        ; Remove all elements of Xs with odd indices.
        Stride-2 = []
        Stride-2 = When(Nil?(Rest(Xs)) [First(Xs)])
        (x1 x2 xs) = Xs
        Stride-2 = (x1 Recur(xs))
}

Cooley-Tukey(dir Xs) {
        Use Algorithm
        N = Arity(Xs)                       ; FFT size
        sub = 'Cooley-Tukey(dir _)
        even = sub(Stride-2(Xs))            ; compute even sub-FFT
        odd = sub(Stride-2(Rest(Xs)))       ; compute odd sub-FFT

        ; Compute the twiddle factor for radix-2 FFT.
        twiddle-factor = Complex:Polar((dir * Math:Pi / N) * #2 #1) * 1
        ; Apply twiddle factor to the odd sub-FFT.
        twiddled = Zip-With(Mul odd Expand(N / #2 (* twiddle-factor) Complex:Cons(1 0)))

        (x1 x2 _) = Xs

        Cooley-Tukey =
                N < #1  : Raise("Cooley-Tukey FFT requires a power-of-two array input")
                N == #1 : [First(Xs)]       ; terminate FFT recursion
                ; Recursively call function and recombine sub-FFT results.
                Concat( Zip-With(Add even twiddled)
                        Zip-With(Sub even twiddled))
}

Analyzer(sig N overlap) {
        ; Gather 'N' frames in a buffer.
        (buf i out) = rcsbuf(0 N sig)
        ; Reduce sample rate of 'buf' by factor of (N / overlap) relative to 'sig'.
        frame = Reactive:Downsample(buf N / overlap)
        ; Compute forward FFT on each analysis frame.
        Analyzer = Cooley-Tukey(#1 frame)
}

Table 3 displays three benchmarks of the example
listing with different control rate settings on an
Intel Core i7-4500U at 2.4 GHz. With control rate
equaling audio rate, the synthesizer is twice as
expensive to compute as with a control rate set to 8.
The benefit of lowering the control rate becomes
marginal after about 32. This demonstrates the
ability of the compiler to deduce data flows and
eliminate redundant computation—note that the
only change was to the downsampling factor of
the LFO.

Discussion

In this section, I discuss Kronos in relation to prior
work and initial user reception. Potential future
work is also identified.

Figure 8. Polyphonic synthesizer (continued on next page).

Package Polyphonic {
        Prioritize-Held-Notes(midi-bytes voices) {
                choose = Control-Logic:Choose
                (status note-number velocity) = midi-bytes
                ; Kill note number if event is note off or note on with zero velocity.
                kill-key = choose(status == 0x80 | (status == 0x90 & velocity == 0i)
                        note-number -1i)
                ; New note number if event is note on and has nonzero velocity.
                is-note-on = (status == 0x90 & velocity > 0i)
                ; A constant specifying highest possible priority value.
                max-priority = 2147483647i
                ; Lower gate and reduce priority for released voice.
                with-noteoff = Map((p k v) => (p - (max-priority & (k == kill-key))
                                k
                                v & (k != kill-key))
                        voices)
                ; Find oldest voice by selecting lowest priority.
                lowest-priority = Fold(Min Map(First voices))
                ; Insert new note.
                Prioritize-Held-Notes =
                        Map((p k v) => choose((p == lowest-priority) & is-note-on
                                        (max-priority note-number velocity)
                                        (p - 1i k v))
                                with-noteoff)
        }

        Allocator(num-voices allocator midi-bytes) {
                ; Create initial voice allocation with running priorities so that the allocator
                ; always sees exactly one voice as the oldest voice.
                voice-init = Algorithm:Expand(num-voices (p _ _) => (p - 1i 0i 0i) (0i 0i 0i))
                ; Generate and clock the voice allocator loop from the MIDI stream.
                old-voices = z-1(voice-init Reactive:Resample(new-voices midi-bytes))
                ; Perform voice allocation whenever the MIDI stream ticks.
                new-voices = allocator(midi-bytes old-voices)
                Allocator = new-voices
        }
}

Kronos and Faust

Among existing programming environments, Faust
is, in principle, closest to Kronos. The environments
share the functional approach. Faust has novel block-
composition operands that are powerful but perhaps
a little foreign syntactically to many users. Kronos
emphasizes high-level semantic metaprogramming
for block composition.

Kronos programs deal with signal values, whereas
Faust programs deal with block diagrams. The

former have syntax trees that correspond one-
to-one with the signal flow, and the latter are
topologically very different. I argue that the cor-
respondence is an advantage, especially if a visual
patching environment is used (Norilo 2012). If
desired, the Kronos syntax can encode block-
diagram algebra with higher-order functions,
down to custom infix operators. Faust can also
encode signal-flow topology by utilizing term
rewriting (Gräf 2010), but only in the feedforward
case.

Figure 8. Polyphonic
synthesizer (continued
from previous page).

; --- Synthesizer -----------------------------------------------------------
FM-Op(freq amp) {
        ; apply sinusoid waveshaping to a sawtooth buzzer
        FM-Op = amp * Approx:Cosine-Shape(Abs(Buzzer(freq) - 0.5) #5)
}

FM-Voice(freq gate) {
        ; attack and decay slew per sample
        (slew+ slew-) = (0.003 -0.0001)
        ; upsample gate to audio rate
        gate-sig = Audio:Signal(gate)
        ; slew limiter as a recursive composition over clipping the value differential
        env = Recursive( gate-sig (old new) => old + Max(slew- Min(slew+ new - old)) )
        ; FM modulator osc
        mod = FM-Op(freq freq * 8 * env)
        ; FM carrier osc
        FM-Voice = FM-Op(freq + mod env)
}

; --- Test bench --------------------------------------------------------------
Synth(midi-bytes polyphony krate) {
        ; transform MIDI stream into a bank of voices
        voices = Polyphonic:Allocator( polyphony
                        Polyphonic:Prioritize-Held-Notes
                        midi-bytes )

        lfo = Reactive:Downsample(FM-Op(5.5 1) krate)
        ; make a simple synth from the voice vector
        Synth = Fold((+)
                Map((age key gate) => FM-Voice(
                        440 * Math:Pow(2 (key - 69 + lfo * gate / 256) / 12)   ; freq
                        gate / 128)                                            ; amp
                    voices))
}

Kronos is designed as a System Fω compiler,
complete with a multirate scheme capable of
handling event streams as well. The multirate
system in Faust (Jouvelot and Orlarey 2011) is a
recent addition and less general, supporting “slow”
signals that are evaluated once per block, audio
signals, and, more recently, up- and downsampled
versions of audio signals. The notion of an event
stream does not exist as of this writing.

The strengths of Faust include the variety of sup-
ported architectures (Fober, Orlarey, and Letz 2011),
generation of block-diagram graphics, symbolic
computation, and mathematical documentation.
The compiler has also been hardened with major

projects, such as a port of the Synthesis Toolkit
(Michon and Smith 2011).

Kronos and Imperative Programming

Poing Imperatif by Kjetil Matheussen (2011) is a
source-to-source compiler that is able to lower
object-oriented constructs into the Faust language.
Matheussen’s work can be seen as a study of iso-
morphisms between imperative programming and
Faust. Many of his findings apply directly to Kronos
as well. Programs in both languages have state,
but it is provided as an abstract language construct

Table 3. Impact of Update
Rate Optimization

krate    μsec per 1,024 Samples

1        257
2        190
8        127
32       118
128      114

and reified by the compiler. Poing Imperatif low-
ers mutable fields in objects to feedback-delay
loops—constructs that represent such abstract state.
Essentially, a tick of a unit-delay signal loop is
equivalent to a procedural routine that reads and
writes an atom of program state. The key difference
from a general-purpose language is that Kronos and
Faust enforce locality of state—side effects cannot
be delegated to subroutines. Matheussen presents
a partial workaround: Subroutines describe side
effects rather than perform them, leaving the final
mutation to the scope of the state.

The reactive capabilities of Kronos present a new
aspect in the comparison with object-oriented pro-
gramming. Each external input to a Kronos program
has a respective update routine that modifies some
of the program state. The inputs are therefore anal-
ogous to object methods. A typical object-oriented
implementation of an audio filter would likely
include methods to set the high-level design param-
eters such as corner frequency and Q, and a method
for audio processing. The design-parameter interface
would update coefficients that are internal to the
filter, which the audio process then uses. Kronos
generates machine code that looks very similar to
this design. The implicitly generated memory for the
signal-clock boundaries contains the coefficients:
intermediate results of the signal path that depend
only on the design parameters.
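
As a rough C++ illustration of that object-oriented design (the names and the
particular one-pole recipe are invented for this sketch; Kronos emits
comparable machine code rather than this source), the factorization might
look as follows.

#include <cmath>

// Illustrative object-oriented filter: a design-parameter method caches a
// coefficient, and the audio-rate method uses the cached values without
// recomputing them for every sample.
class OnePoleLowpass {
public:
    // "Design parameter" entry point: runs only when the parameter changes.
    void setCorner(float cornerHz, float sampleRate) {
        coeff = std::exp(-2.0f * 3.14159265f * cornerHz / sampleRate);
    }
    // "Audio clock" entry point: runs once per sample.
    float process(float in) {
        state = in + coeff * (state - in);
        return state;
    }
private:
    float coeff = 0.0f;   // intermediate value depending only on design parameters
    float state = 0.0f;   // signal memory, analogous to a reified unit delay
};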

At the source level, the object-oriented program

spells out the methods, signal caches, and delay
buffers. Kronos programs declare the signal flow,
leaving method factorization and buffer allocation to
the compiler. This is the gist of the tradeoff offered:
Kronos semantics are more narrowly defined,
allowing the programmer to concentrate exclusively

on signal flow. This is useful when the semantic
model suits the task at hand; but if it does not, the
language is not as flexible as a general-purpose one.

User Evaluation

I have been teaching the Kronos system for two
year-long courses at the University of Arts Helsinki,
as well as intensive periods in the Conservatory
of Cosenza and the National University of Ireland,
Maynooth. In addition, I have collected feedback
from experts at international conferences and
colloquia, for example, at the Institut de Recherche
et de Coordination Acoustique/Musique (IRCAM)
in Paris and the Center for Computer Research
in Music and Acoustics (CCRMA) at Stanford
Universidad.

Student Reception

The main content of my Kronos teaching consists of
using the visual patcher and just-in-time compiler
in building models of analog devices, in the design of
digital instruments, and in introducing concepts of
functional programming. The students are majors in
subjects such as recording arts or electronic music.
Students generally respond well to filter implemen-
tation, as the patches correspond very closely to
textbook diagrams. They respond well to the idea
of algorithmic routing, many expressing frustration
that it is not more widely available, but they struggle
to apply it by themselves. Many are helped by termi-
nology from modular synthesizers, such as calling
Map a bank and Reduce a cascade. During the longer
courses, students have implemented projects, such
as AudioUnit plug-ins and mobile sound-synthesis
applications.

Expert Reception

Among the expert audience, Kronos has attracted
the most positive response from engineers and
signal-processing researchers. Composers seem to
be less interested in the problem domain it tackles.
Many experts have considered the Kronos syntax to
be easy to follow and intuitive, and its compilation
speed and performance to be excellent. A common
doubt concerns the capability of a static
dataflow graph to express a sufficiently wide range
of algorithms. Adapting various algorithms to the
model is indeed an ongoing research effort.

Recently, a significant synergy was discovered
between the Kronos dataflow language and the
WaveCore, a multicore DSP chip designed by
Verstraelen, Kuper, and Smit (2014). The dataflow
language closely matches the declarative WaveCore
language, and a collaborative effort is ongoing to
develop Kronos as a high-level language for the
WaveCore chip.

Current State

Source code and release files for the Kronos compiler
are available at https://bitbucket.org/vnorilo/k3.
The code has been tested on Windows 8, Mac OS X
10.9, and Ubuntu Linux 14, for which precompiled
binaries are available. The repository includes
the code examples shown in this article. Both
the compiler and the runtime library are publicly
licensed under the GNU General Public License,
Version 3.

The status of the compiler is experimental.
The correctness of the compiler is under ongoing
verification and improvement by means of a test
suite that exercises a growing subset of possible use
cases. The examples presented in this article are a
part of the test suite.

Future Work

Finally, I discuss the potential for future research.
The visual front end is especially interesting in the
context of teaching and learning signal processing,
and core language enhancements could further
extend the range of musical programming tasks
Kronos is able to solve well.

Visual Programming and Learnability

Kronos is designed from the ground up to be
adaptable to visual programming. In addition to
the core technology, supporting tools must be built
to truly enable it. The current patcher prototype
includes some novel ideas for making textual and
visual programming equally powerful (Norilo 2012).
Instantaneous visual feedback in program debug-
ging, inspection of signal flow, and instrumentation
are areas where interesting research could be carried
out. Such facilities would enhance the system’s
suitability for pedagogical use.

Core Language Enhancements

Type determinism (as per System Fω) and early
binding are key to efficient processing in Kronos.
It is acknowledged, however, that they form a
severe restriction on the expressive capability of the
dataflow language.

Csound is a well-known example of an envi-
ronment where notes in a score and instances of
signal processors correspond. For each note, a signal
processor is instantiated for the required duration.
This model cleanly associates the musical object
with the program object.

Such a model is not immediately available in
Kronos. The native idiom for dynamic polyphony
would be to generate a signal graph for the maximum
number of voices and utilize a dynamic clock to
shut down voices to save processing time. This
is not as neat as the dynamic allocation model,
because it forces the user to specify a maximum
polyphony.
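
As a rough analogy in a general-purpose language (the voice count, envelope, and oscillator below are arbitrary assumptions, and the sketch is not Kronos source), the idiom amounts to allocating a fixed pool of voice states and simply skipping the inactive ones, which is the effect the dynamic clock achieves within the signal graph.

    #include <array>

    struct Voice {
        bool  active = false;
        float phase = 0, freq = 0, amp = 0;

        float tick(float sampleRate) {
            phase += freq / sampleRate;
            if (phase >= 1.0f) phase -= 1.0f;
            amp *= 0.9999f;                       // crude decay envelope
            if (amp < 1e-4f) active = false;      // the voice shuts itself down
            return amp * (phase < 0.5f ? 1.0f : -1.0f);  // naive square wave
        }
    };

    struct Synth {
        static constexpr int MaxVoices = 16;      // user-specified maximum polyphony
        std::array<Voice, MaxVoices> voices;

        void noteOn(float freq) {
            for (auto &v : voices)
                if (!v.active) {
                    v.active = true;
                    v.phase  = 0.0f;
                    v.freq   = freq;
                    v.amp    = 1.0f;
                    return;
                }
        }

        float tick(float sampleRate) {
            float mix = 0.0f;
            for (auto &v : voices)
                if (v.active) mix += v.tick(sampleRate);  // inactive voices are skipped
            return mix;
        }
    };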

More generally, approaches to time-variant pro-
cesses on the level of the musical score are inter-
esting; works such as Eli Brandt’s (2002) Chronic
offer ideas on how to integrate time variance and
the paradigm of functional programming. Dynamic
mutation could be introduced into the dataflow
graph by utilizing techniques from class-based
polymorphic languages, such as type erasure on
closures.

In its current state, Kronos does not aim to
replace high-level composition systems such as
Csound or Nyquist (Dannenberg 1997). It aims to
implement the bottom of the signal-processing stack
well, and thus could be a complement to a system
operating on a higher ladder of abstraction. Both of
the aforementioned systems could, for example, be
extended to drive the Kronos just-in-time compiler
for their signal-processing needs.

Conclusion

This article has presented Kronos, a language
and a compiler suite designed for musical signal
Procesando. Its design criteria are informed by the
requirements of real-time signal processing fused
with a representation on a high conceptual level.
Some novel design decisions enabled by the
DSP focus are whole-program type derivation and
compile-time computation. These features aim to
offer a simple, learnable syntax while providing
extremely high performance. In addition, the ideas
of ugen parameterization and block-diagram algebra
were generalized and described in terms of types
in System Fω. Abstract representation of state via
signal delays and recursion bridges the gap between
pure functions and stateful ugens.

All signals are represented by a universal signal
modelo. The system allows the user to treat events,
control, and audio signals with unified semantics,
with the compiler providing update-rate optimiza-
tions. The resulting machine code closely resembles
that produced by typical object-oriented strategies
for lower-level languages, while offering a very
high-level dataflow-based programming model on
the source level. As such, the work can be seen as
a study of formalizing a certain set of programming
practices for real-time signal-processing code, y
providing a higher-level abstraction that conforms
to them. The resulting source code representation
is significantly more compact and focused on the
essential signal flow—provided that the problem at
hand can be adapted to the paradigm.

References

Abelson, H., et al. 1991. “Revised Report on the Algorithmic Language Scheme.” ACM SIGPLAN Lisp Pointers 4(3):1–55.

Barendregt, H. 1991. “Introduction to Generalized Type Systems.” Journal of Functional Programming 1(2):124–154.

Boulanger, R. 2000. The Csound Book. Cambridge, Massachusetts: MIT Press.

Brandt, E. 2002. “Temporal Type Constructors for Computer Music Programming.” PhD dissertation, Carnegie Mellon University, School of Computer Science.

Dannenberg, R. B. 1997. “The Implementation of Nyquist, a Sound Synthesis Language.” Computer Music Journal 21(3):71–82.

Ertl, M. A., and D. Gregg. 2007. “Optimizing Indirect Branch Prediction Accuracy in Virtual Machine Interpreters.” ACM Transactions on Programming Languages and Systems 29(6):37.

Fober, D., Y. Orlarey, and S. Letz. 2011. “Faust Architectures Design and OSC Support.” In Proceedings of the International Conference on Digital Audio Effects, pp. 213–216.

Gräf, A. 2010. “Term Rewriting Extensions for the Faust Programming Language.” In Proceedings of the Linux Audio Conference, pp. 117–122.

Jouvelot, P., and Y. Orlarey. 2011. “Dependent Vector Types for Data Structuring in Multirate Faust.” Computer Languages, Systems and Structures 37(3):113–131.

Kim, H., et al. 2009. “Virtual Program Counter (VPC) Prediction: Very Low Cost Indirect Branch Prediction Using Conditional Branch Prediction Hardware.” IEEE Transactions on Computers 58(9):1153–1170.

Lattner, C., and V. Adve. 2004. “LLVM: A Compilation Framework for Lifelong Program Analysis and Transformation.” International Symposium on Code Generation and Optimization 57(C):75–86.

Laurson, M., and V. Norilo. 2006. “From Score-Based Approach towards Real-Time Control in PWGLSynth.” In Proceedings of the International Computer Music Conference, pp. 29–32.

Matheussen, K. 2011. “Poing Impératif: Compiling Imperative and Object Oriented Code to Faust.” In Proceedings of the Linux Audio Conference, pp. 55–60.

McCartney, J. 2002. “Rethinking the Computer Music Language: SuperCollider.” Computer Music Journal 26(4):61–68.

Michon, R., and J. O. Smith. 2011. “Faust-STK: A Set of Linear and Nonlinear Physical Models for the Faust Programming Language.” In Proceedings of the International Conference on Digital Audio Effects, pp. 199–204.

Norilo, V. 2011a. “Designing Synthetic Reverberators in Kronos.” In Proceedings of the International Computer Music Conference, pp. 96–99.

Norilo, V. 2011b. “Introducing Kronos: A Novel Approach to Signal Processing Languages.” In Proceedings of the Linux Audio Conference, pp. 9–16.

Norilo, V. 2012. “Visualization of Signals and Algorithms in Kronos.” In Proceedings of the International Conference on Digital Audio Effects, pp. 15–18.

Norilo, V. 2013. “Recent Developments in the Kronos Programming Language.” In Proceedings of the International Computer Music Conference, pp. 299–304.

Norilo, V., and M. Laurson. 2008a. “A Unified Model for Audio and Control Signals in PWGLSynth.” In Proceedings of the International Computer Music Conference, pp. 13–16.

Norilo, V., and M. Laurson. 2008b. “Audio Analysis in PWGLSynth.” In Proceedings of the International Conference on Digital Audio Effects, pp. 47–50.

Orlarey, Y., D. Fober, and S. Letz. 2009. “Faust: An Efficient Functional Approach to DSP Programming.” In G. Assayag and A. Gerzso, eds. New Computational Paradigms for Music. Paris: Delatour, IRCAM, pp. 65–97.

Ousterhout, J. K. 1998. “Scripting: Higher-Level Programming for the 21st Century.” Computer 31(3):23–30.

Puckette, M. 1988. “The Patcher.” In Proceedings of the International Computer Music Conference, pp. 420–429.

Puckette, M. 1996. “Pure Data: Another Integrated Computer Music Environment.” In Proceedings of the International Computer Music Conference, pp. 269–272.

Roads, C. 1996. The Computer Music Tutorial. Cambridge, Massachusetts: MIT Press.

Schottstaedt, B. 1994. “Machine Tongues XVII: CLM; Music V Meets Common Lisp.” Computer Music Journal 18:30–37.

Schroeder, M. R. 1969. “Digital Simulation of Sound Transmission in Reverberant Spaces.” Journal of the Acoustical Society of America 45(1):303.

Smith, J. O. 2007. Introduction to Digital Filters with Audio Applications. Palo Alto, California: W3K.

Sorensen, A., and H. Gardner. 2010. “Programming with Time: Cyber-Physical Programming with Impromptu.” In Proceedings of the ACM International Conference on Object-Oriented Programming Systems Languages, and Applications, pp. 822–834.

Strachey, C. 2000. “Fundamental Concepts in Programming Languages.” Higher-Order and Symbolic Computation 13(1–2):11–49.

Van Roy, P. 2009. “Programming Paradigms for Dummies: What Every Programmer Should Know.” In G. Assayag and A. Gerzso, eds. New Computational Paradigms for Music. Paris: Delatour, IRCAM, pp. 9–49.

Verstraelen, M., J. Kuper, and G. J. M. Smit. 2014. “Declaratively Programmable Ultra-Low Latency Audio Effects Processing on FPGA.” In Proceedings of the International Conference on Digital Audio Effects, pp. 263–270.

Wang, G., and P. R. Cook. 2003. “ChucK: A Concurrent, On-the-Fly, Audio Programming Language.” In Proceedings of the International Computer Music Conference, pp. 1–8.

Wang, G., R. Fiebrink, and P. R. Cook. 2007. “Combining Analysis and Synthesis in the ChucK Programming Language.” In Proceedings of the International Computer Music Conference, pp. 35–42.
