FOCUS FEATURE: Connectivity, Cognition, and Consciousness
It’s about time: Linking dynamical systems
with human neuroimaging to
understand the brain
Yohan J. John1, Kayle S. Sawyer2,3,4,5, Karthik Srinivasan6, Eli J. Müller7,
Brandon R. Munn7, and James M. Shine7
1Neural Systems Laboratory, Department of Health Sciences, Boston University, Boston, MA, USA
2Department of Anatomy and Neurobiology, Boston University, Boston, MA, USA
3Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
4Boston VA Healthcare System, Boston, MA, USA
5Sawyer Scientific, LLC, Boston, MA, USA
6McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
7Brain and Mind Centre, University of Sydney, Sydney, NSW, Australia
Keywords: fMRI, Dynamics, Attractor landscapes, Neuroscience, Bifurcations
ABSTRACT
Most human neuroscience research to date has focused on statistical approaches that describe
stationary patterns of localized neural activity or blood flow. While these patterns are often
interpreted in light of dynamic, information-processing concepts, the static, local, and inferential
nature of the statistical approach makes it challenging to directly link neuroimaging results
to plausible underlying neural mechanisms. Here, we argue that dynamical systems theory
provides the crucial mechanistic framework for characterizing both the brain’s time-varying
quality and its partial stability in the face of perturbations, and hence, that this perspective
can have a profound impact on the interpretation of human neuroimaging results and their
relationship with behavior. After briefly reviewing some key terminology, we identify three
key ways in which neuroimaging analyses can embrace a dynamical systems perspective:
by shifting from a local to a more global perspective, by focusing on dynamics instead of static
snapshots of neural activity, and by embracing modeling approaches that map neural dynamics
using “forward” models. Through this approach, we envisage ample opportunities for
neuroimaging researchers to enrich their understanding of the dynamic neural mechanisms that
support a wide array of brain functions, both in health and in the setting of psychopathology.
AUTHOR SUMMARY
The study of dynamical systems offers a powerful framework for interpreting neuroimaging
data from a range of different contexts; however, as a field, we have yet to fully embrace the
power of this approach. Here, we offer a brief overview of some key terms from the dynamical
systems literature, and then highlight three ways in which neuroimaging studies can begin to
embrace the dynamical systems approach: by shifting from local to global descriptions of
activity, by moving from static to dynamic analyses, and by transitioning from descriptive to
generative models of neural activity patterns.
An open access journal
Citation: John, Y. J., Sawyer, K. S.,
Srinivasan, K., Müller, E. J., Munn, B. R.,
& Shine, J. M. (2022). It’s about time:
Linking dynamical systems with human
neuroimaging to understand the brain.
Network Neuroscience, 6(4), 960–979.
https://doi.org/10.1162/netn_a_00230
DOI:
https://doi.org/10.1162/netn_a_00230
Received: 30 September 2021
Accepted: 4 January 2022
Competing Interests: The authors have
declared that no competing interests
exist.
Corresponding Author:
James M. Shine
mac.shine@sydney.edu.au
Handling Editor:
Randy McIntosh
Copyright: © 2022
Massachusetts Institute of Technology
Published under a Creative Commons
Attribution 4.0 International
(CC BY 4.0) license
The MIT Press
INTRODUCTION
Making sense of the inner workings of the human brain is a daunting task. Whole-brain neu-
roimaging represents a crucial device for reducing our uncertainty about how the brain works.
But what if the assumptions inherent within traditional neuroimaging analyses have us on the
wrong track? In many ways, neuroscience is relatively preparadigmatic (Kuhn, 1962), akin to
the field of biology before the insights of Charles Darwin, or chemistry before atomic theory.
With this in mind, how then should we approach modeling the brain? We suggest that a
dynamical systems perspective provides a path for scientists to break out of the piecemeal
progress circumscribed by traditional, static data-fitting statistical procedures. This modeling
approach is also ideally suited to mechanistic accounts of the emergence of actions, emotions,
and thoughts. We argue that dynamical systems theory (DST) is naturally suited to discussing
the temporal aspects of neural and behavioral phenomena, as well as how interactions—
within the brain and between the brain and external phenomena—unfold over time.
Since the cognitive revolution, neural processes have been routinely described in terms of
manipulations of discrete “states,” “symbols,” or “codes” (Brette, 2019). The prevailing analogy
used by this approach is the notion of “digital computing”: The brain is argued to “process infor-
mation” by flexibly rearranging between different states. This approach naturally leads to a view
of the brain as a mosaic of disjoint, independent functional units—consider the oversimplified
conception of the amygdala as exclusively devoted to processing “fear” (Pessoa & Adolphs,
2010). This strategy has generated a “parts list” for neural processes, but only rarely pays close
attention to how the parts interact in order to mediate the behavior of the system as a whole.
Moreover, the information-processing framework contains latent anthropomorphic thinking:
coding, message-passing, and communication are metaphors that rely on the intuitive famil-
iarity of social interactions—their neurobiological underpinnings are often left unstated
(Brette, 2019).
In contrast to the view of the brain as a mosaic of quasi-independent functional units or
agents, DST frames neural phenomena in terms of trajectories governed by coupled differential
equations (Beurle, 1956; Caianiello, 1961; Corchs & Deco, 2004; Freeman, 1975; Griffith,
1963; Grossberg, 1967; Jirsa et al., 1994; Schoner & Kelso, 1988; Wilson & Cowan, 1972;
Zeeman, 1973). These equations naturally lend themselves to causal and mechanistic inter-
pretations, thereby cashing out anthropomorphic metaphors in terms of simpler biophysical
processes such as excitation and inhibition. While the mathematical research behind DST
has a long history, nonlinear dynamical systems exhibit behavior difficult to analyze without
Simulation. Advances in computational power have rendered DST much more tractable as a
tool for neuroimaging (Breakspear, 2017; Cabral et al., 2014; Deco et al., 2009, 2011, 2013a,
2013b, 2015, 2021; Deco & Jirsa, 2012; Ghosh et al., 2008; Gollo et al., 2015; Hlinka &
Coombes, 2012; Pillai & Jirsa, 2017; Sanz Perl et al., 2021; Shine et al., 2019a). Further,
the DST modeling framework has enabled simulations of neural dynamics that are predictive
and generative: simulated trajectories can be used to fit specific datasets (beim Graben et al.,
2019; Golos et al., 2015; Hansen et al., 2015; Koppe et al., 2019; Vyas et al., 2020), but can
also point researchers beyond data, for example, by contributing to experimental design and
facilitating integration of findings from different paradigms and species.
An exhaustive survey of DST is beyond the scope of this review, but the key concepts have
been described in depth in books accessible to neuroscientists (Durstewitz, 2017; Izhikevich,
2006; Rolls & Deco, 2010; Strogatz, 2015). Several neuroscience papers also serve as intro-
ductions to DST (Breakspear, 2017; Csete & Doyle, 2002; Favela, 2020, 2021; Miller, 2016;
Shine et al., 2021), so here we will focus on how to integrate these modes of thinking with a
functional, adaptive account of the brain. We will argue that DST is a lens that brings into
sharp focus certain aspects of neural processing that are left somewhat blurred through the
lens of the information-processing framework, including the importance of stability, flexibility,
nonlinearity, and history dependence. Dynamical modes of description are particularly
expressive for describing how humans and other animals pursue survival goals in ever-
changing situations in ways that are both stable and fluid. More specifically, we argue that
human neuroimaging, due to the availability of whole-brain sampling of brain dynamics, is
especially suited to leverage concepts from DST (Deco et al., 2015; Galadí et al., 2021;
Kringelbach & Deco, 2020). Importantly, beneath the surface-level complexity and abstraction
of differential equations, DST enables a visual style of thinking that all neuroscientists can
make use of in order to uncover causal and functional mechanisms (Daunizeau et al., 2012;
Golos et al., 2015; Izhikevich, 2006; McIntosh & Jirsa, 2019; Rabinovich et al., 2006, 2015,
2020; Rabinovich & Varona, 2011; Shine et al., 2021; Wong & Wang, 2006).
In the first section of this review, we outline key concepts from DST that serve as building
blocks for intuitive models of neural function. We then go on to suggest three ways in which
current neuroimaging techniques can be productively combined with DST, thereby creating a
powerful new vantage point from which to view the brain.
A VIEW OF THE BRAIN THROUGH THE DYNAMICAL SYSTEMS PRISM
Traditional functional analyses of brain areas have allowed researchers to identify statistically
reliable neural “puzzle pieces.” These methods give us insight into what a brain area or net-
work may functionally mediate, but not how this mediation unfolds in time, or better yet, Wie
coordinated interactions between the identified neural regions manifest as behavior. Our claim
is that DST is the ideal framework for piecing together this brain-behavior puzzle, given that it
foregrounds interaction and timing (McIntosh & Jirsa, 2019). Moreover, a dynamical systems
perspective may suggest principled ways to reformulate psychiatric conceptions (Durstewitz
et al., 2021) and “folk psychological” terms used to describe behavior, such as “attention,”
“memory,” “emotion,” and “cognition,” and the functions of a given region may be better
understood as integrated network-level trajectories rather than modular and localizable
processes (Hommel et al., 2019). Conversely, the functions of some localized areas may be better
conceived in terms of their effects on network dynamics, rather than in terms of psychological
concepts.
DST characterizes how a system—a neuron, a circuit, or even the whole brain—changes
im Laufe der Zeit. A dynamical system is defined by its state space (or phase space), which character-
izes the configurations available to the system. The dimensions of the state space specify the
system’s possible dynamics. For example, each dimension could be the firing rate of a neuron,
or the metabolic activity of voxels, or the intensity of a stimulus. At any instant of time, the
system is understood as occupying a point in its state space; a trajectory is a path through the
state space, mapping how the values for each dimension change over time (Figure 1). Differential
equations stipulate how the system’s trajectory will evolve over time from a chosen starting
point (the initial conditions).
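To make the idea concrete, here is a minimal, self-contained sketch (our illustration, not a model drawn from the literature): two mutually inhibiting firing-rate units define a two-dimensional state space, and integrating their coupled differential equations from a chosen initial condition traces out a trajectory. All parameter values are arbitrary choices made for the purpose of the example.

```python
# Illustrative only: a two-dimensional state space spanned by the firing rates of
# two mutually inhibiting units. Integrating the coupled differential equations
# from an initial condition yields a trajectory through that space.
import numpy as np
from scipy.integrate import solve_ivp

def firing_rate_system(t, x, w_inhib=6.0, drive=3.0, tau=0.1):
    """dx/dt for two units that receive a constant drive, decay, and inhibit each other."""
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    r1, r2 = x
    dr1 = (-r1 + sigmoid(drive - w_inhib * r2)) / tau
    dr2 = (-r2 + sigmoid(drive - w_inhib * r1)) / tau
    return [dr1, dr2]

# Two nearby initial conditions settle into different attractors (winner-take-all)
for x0 in ([0.55, 0.45], [0.45, 0.55]):
    sol = solve_ivp(firing_rate_system, (0.0, 2.0), x0)
    print(f"initial condition {x0} -> final state {np.round(sol.y[:, -1], 2)}")
```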
DST enables concise descriptions of families of trajectories that share qualitative properties.
For example, if a family of trajectories all tend toward a particular region of state space, then
that region is called an attractor (the simplest of which is called a fixed point attractor). The
parts of state space from which the system finds itself “drawn” to an attractor form the corre-
sponding basin of attraction. The term “basin” here alludes to a valley in a mountain range — a
ball placed on any slope of a valley will roll to the bottom. Understanding a state space as a
State space:
A representation of all possible states
that can be attained by the system
(i.e., a point in state space).
Trajectory:
The time course of a system given a
particular set of initial conditions.
Attractor:
A region of one or more fixed points
that trajectories move towards.
Fixed point:
A point in state space where the system
is stationary (i.e., the derivative with
respect to time is zero).
Basin of attraction:
An area of state space from which
systems will evolve towards a
particular attractor.
Figure 1. Overview of state space concept. (A) Large multivariate recordings of brain activity, such as in neuroimaging datasets, can be more
tractable to analyze and visualize after first projecting the data into a state space sensitive to a desired feature in the data—for example,
principal components for variance, or independent components for distinct signals. (B) Upper panel: pitchfork bifurcation diagram showing
a parameter change that transitions the system from a single stable attractor regime to a multistable regime with two stable attractors (blue lines)
and one unstable attractor (red, dotted line). Lower panel: a potential energy landscape depiction of the same unistable and multistable
regimes from above. (C) Identifying the attractor landscape of a system provides a reference for the system’s dynamics, which then predicts
distinct responses to perturbation. External input to a system can be treated either as a perturbation to the system’s trajectory or as a deformation
of the system’s attractor landscape.
Repeller:
A region of one or more fixed points
that trajectories move away from.
Attractor landscape:
A state space containing multiple
basins of attraction.
landscape is an analogy that holds even in high-dimensional systems that cannot be visual-
ized. The idea of an attractor provides an intuitive, mechanistic account of stability: a system
in an attractor can be bumped or perturbed, but as long as the system stays within the attractor
basin, it will eventually return to the bottom of the basin, like a marble rolling to the bottom of
a bathtub. In contrast, a repeller is an inverted attractor, and therefore analogous to the top of a
hill or a ridge: a system precariously balanced on a repeller will be pushed away from it by even a small perturbation.
The topography of fixed points isn’t always so clear cut. Indeed, fixed points can contain
both attractive and repulsive properties, as is the case with a saddle node, which can be
thought of topographically as similar to a mountain pass—unstable in one direction (i.e.,
you could just as easily move backward or forward along the path) but stable in another
(i.e., it’s hard to climb the mountains on either side). Features such as saddle nodes inherently
increase the potential complexity of emergent dynamics; however, it is important to point out
that these qualitative features can only be identified when the differential equations of a system
are posited. This implies that assigning terms such as “attractor” or “saddle” to a family of
dynamic trajectories derived from data is necessarily dependent on the choice of model
and cannot be inferred directly from data.
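As a small, hedged illustration of that last point (ours, not drawn from the cited literature), once model equations have been posited, fixed points can be located numerically and classified from the eigenvalues of the local Jacobian: all negative real parts indicate an attractor, all positive a repeller, and mixed signs a saddle. The model and parameter values below are the same illustrative two-unit system sketched earlier.

```python
# Illustrative classification of fixed points for a posited model (not data-derived).
import numpy as np
from scipy.optimize import fsolve

def f(x, w_inhib=6.0, drive=3.0, tau=0.1):
    """Vector field of the illustrative two-unit mutual-inhibition model."""
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    r1, r2 = x
    return np.array([(-r1 + sigmoid(drive - w_inhib * r2)) / tau,
                     (-r2 + sigmoid(drive - w_inhib * r1)) / tau])

def jacobian(x, eps=1e-6):
    """Numerical Jacobian of f at x via central differences."""
    J = np.zeros((2, 2))
    for j in range(2):
        step = np.zeros(2)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2 * eps)
    return J

for guess in ([0.5, 0.5], [0.9, 0.1], [0.1, 0.9]):
    fixed_point = fsolve(f, guess)
    eigenvalues = np.linalg.eigvals(jacobian(fixed_point))
    if np.all(eigenvalues.real < 0):
        kind = "attractor"
    elif np.all(eigenvalues.real > 0):
        kind = "repeller"
    else:
        kind = "saddle"
    print(np.round(fixed_point, 3), kind)
```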
The set of all possible motivational states of an animal is an example of an attractor land-
scape (Deco & Jirsa, 2012; Shine, 2021) or “energy” landscape (though the use of the term
“energy” is based on a mathematical analogy and need not possess the same physical dimen-
sions as energy). The attractor basin of any given goal-oriented state must not be too deep: if an
animal becomes so unwavering in its search for food that it is not perturbed by the appearance
of a predator, then it is unlikely to survive for very long. Thus, behavioral flexibility requires
that certain stimuli can nudge the system from one attractor basin to another. In other words,
the trajectories of a flexible neural system are likely to traverse regions of state space that are
repellers, since such regions are poised to enter nearby attractor basins. Another example of an
attractor landscape is the space of perceptual targets that can capture attention (Rabinovich
et al., 2013). Focused, unwavering attention on a target might correspond to the system
being in a valley that is much deeper than neighboring ones, and from which the system cannot
easily be dislodged by distractors. Similarly, high distractibility should correspond to a landscape of
shallow attractors. Depending on the modeling goal, DST can be used to simulate how individual
psychological constructs change over time (e.g., anger; Hoeksma et al., 2007), or how mental states
shift across a landscape of multiple competing mental states, jostled by environmental forces
(Jirsa & Kelso, 2004; Riley & Holden, 2012; Tognoli & Kelso, 2014). Beyond attractors, there
are more subtle qualitative patterns, such as those associated with transient dynamics, that
may be required to characterize trajectories exhibiting both recurring phases and variability
or flexibility (Rabinovich et al., 2008; Rabinovich & Varona, 2011).
These external transient stimuli can be considered using the language of DST: for a system
residing in state space, the only way for the system to move against the direction prescribed by
the space is through a perturbation. Indeed, determining whether a perturbation is considered
“small,” or an attractor basin is considered “deep,” depends on their relative scales, as well as
the exact position of the system within the attractor basin. For a system occupying the deepest
point in a given attractor, perturbations below a certain scale will never push the system out of
the attractor basin. If a system has already been perturbed so that it is near the ridge separating
an attractor basin from that of an adjacent attractor, a relatively small push may be all that is
needed to disrupt stability (Figure 1C). In the case of attention, this implies that, however
focused an attentional state may be, there will be a distractor or combination of distractors
that will have sufficient magnitude to push the system out of the corresponding attractor basin.
Difficulties in maintaining attentional focus may arise from neural disruptions or developmen-
tal abnormalities that change the attractor depth of a target relative to the magnitude of per-
turbations, rendering attention easily captured by distractors (Duch, 2019; Iravani et al., 2021;
John et al., 2018).
There are theoretical tools that motivate segmenting the brain into quasi-independent sub-
systems; we will now argue that this parcellation is far more illuminating than the traditional
mosaic of functions. DST is not simply a taxonomy of attractors, repellers, and other qualitative
features of trajectories. Important insights are derived from the study of bifurcations: qualitative
changes to state space that arise from smooth parameter changes. Parameters, also referred to
as “codimensions,” are distinct from the dimensions that define the state space. A typical
example of a bifurcation is the transition from quiescence to stable repetitive spiking in the
two-dimensional FitzHugh–Nagumo model and its descendants (FitzHugh, 1955; Izhikevich,
2006). In this simplification of the Hodgkin–Huxley model of the action potential, the excit-
atory input to the model neuron serves as a parameter, while the two dimensions are voltage
and recovery, which characterize the spiking behavior. Increasing the input can trigger a “sub-
critical Hopf bifurcation,” in which a point attractor, the stable quiescent state, becomes unsta-
ble and an attractive limit cycle forms, such as is the case for periodic action potentials. As
with all concepts in DST, bifurcations have a precise meaning only when we specify the model
equations. But awareness of the general idea may point researchers toward mathematical
models and theoretical insight. For example, in the case of the motivational attractor land-
scape discussed above, a bifurcation could occur if the environment affords only one salient
Perturbation:
A small extrinsic change in the
position of the system in state space
(not governed by the system’s
differential equations).
Bifurcation:
A qualitative change in the behavior
of the system produced by a change
in a parameter of the differential
equations.
Limit cycle:
A region of state space that takes the
form of a closed, cyclic trajectory.
goal initially, but affords two, say, eating and mating, after a transition arising from a parameter
change, such as a decrease in perceived danger—the shift from one to two motivational attrac-
tors constitutes a bifurcation. Bifurcations have also been used to model the development of
psychiatric disorders such as depression (Ramirez-Mahaluf et al., 2017).
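The FitzHugh–Nagumo example above can be made tangible with a short simulation (a sketch under our own parameter choices, using standard textbook values rather than anything taken from the cited studies). Sweeping the input current acts as a bifurcation parameter: below a critical value the voltage settles into a quiescent fixed point, above it a limit cycle of repetitive spiking appears.

```python
# Illustrative bifurcation sweep of the FitzHugh-Nagumo model (textbook parameters).
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, x, I, a=0.7, b=0.8, tau=12.5):
    """Voltage-like variable v and recovery variable w, driven by input current I."""
    v, w = x
    return [v - v**3 / 3.0 - w + I, (v + a - b * w) / tau]

for I in (0.0, 0.2, 0.4, 0.6, 0.8):
    sol = solve_ivp(fitzhugh_nagumo, (0.0, 500.0), [-1.0, 1.0], args=(I,), max_step=0.5)
    v_late = sol.y[0, sol.t > 300.0]              # discard the initial transient
    amplitude = v_late.max() - v_late.min()       # ~0: quiescent; large: limit cycle
    print(f"input I = {I:.1f}  late oscillation amplitude = {amplitude:.2f}")
```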
NEUROMODULATING THE MANIFOLD
What kinds of neural phenomena can deform the multidimensional attractor landscapes of the
brain? Viewing neuromodulatory ligands such as dopamine, noradrenaline, and serotonin as
parameters of subnetworks in the brain may provide fresh perspectives on how the brain
flexibly alters its own low-dimensional neural dynamics. There is long-standing evidence that
neuromodulatory tone is tightly coupled to cognitive function, often by way of an inverted
U-shaped relationship (Arnsten, 1998)—for example, noradrenaline can transition an individual
from a disengaged state to an engaged mindset, and then back to disengaged. To test whether these
capacities were linked to attractor landscape dynamics, Shine et al. (2018) mimicked the
effects of neuromodulatory tone on neuronal activity by altering neural gain—effectively
tuning how much influence individual populations in the network have over one another.
Increasing neural gain at intermediate levels of excitability caused an abrupt, nonlinear
increase in interregional synchrony that overlapped with empirical network topological signa-
tures observed when analyzing task-based fMRI data (Shine et al., 2016). This same model was
used to demonstrate a gain-mediated increase in interregional transfer entropy (Li et al., 2019).
Given the similarity in the mechanisms by which neuromodulatory chemicals impact neural
gain (Shine et al., 2021), we expect other neuromodulatory ligands to have similar effects
on network dynamics, with idiosyncrasies that betray their unique functions (Kringelbach
et al., 2020).
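As a toy illustration of the gain manipulation described above (our own sketch, not the neural mass model used in the cited work), the two manipulations can be pictured as reshaping a population’s sigmoid input-output curve in different ways: steepening its slope (“neural gain”) versus scaling its output (“excitability”).

```python
# Toy sigmoid transfer function: "gain" steepens the slope, "excitability" scales the output.
import numpy as np

def transfer(drive, gain=1.0, excitability=1.0, threshold=0.0):
    """Population firing rate as a function of input drive."""
    return excitability / (1.0 + np.exp(-gain * (drive - threshold)))

inputs = np.linspace(-3.0, 3.0, 7)
print("baseline           :", np.round(transfer(inputs), 2))
print("higher neural gain :", np.round(transfer(inputs, gain=3.0), 2))
print("higher excitability:", np.round(transfer(inputs, excitability=1.5), 2))
```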
Neuromodulatory ligands can also enact more subtle effects on state space dynamics
(Figure 2). For example, Munn et al. (2021) used a combination of 7T fMRI and statistical phys-
ics to demonstrate that the activity patterns in key hubs of the ascending arousal system dif-
ferentially affect the brain’s attractor landscape. Specifically, activity in the locus coeruleus (the
primary source of noradrenaline for the brain) was found to precede a flattening of the attractor
landscape and hence allowed the system to leave an attractor with a smaller perturbation than
was previously necessary. In contrast, blood flow in the basal nucleus of Meynert (the primary
source of cholinergic inputs to the cortex) was found to precede moments in which the brain
remained “stuck” in a deep well with a greatly diminished ability to escape. Importantly, these
changes are also tied to alterations in phenomenological states. By analyzing fMRI data
obtained during breath awareness meditation, Munn and colleagues found similar attractor
landscape dynamics linked to alterations in internal awareness—specifically, the moments
when meditators noticed that their thoughts had “wandered” from their breath. This phenom-
enon is also highly reminiscent of the notion of a noradrenaline-mediated “network reset”
(Sara & Bouret, 2012), which has also been used to explain switches in perceptual stability
associated with bistable images (Einhäuser et al., 2008), and hence may represent a fundamen-
tal feature of the intersection between neuromodulatory tone and network-level dynamics.
DYNAMICAL SYSTEMS THEORY FOR HUMAN NEUROIMAGING
Reframing neuroimaging data in the language of DST offers an exciting opportunity to inves-
tigate the brain using a precise language tailor-made for describing the distributed, dynamic,
and highly integrated nature of the brain. Following in the footsteps of pioneering studies in the
field that combined neuroimaging, computational modeling, and cognitive neuroscience tasks
Figure 2. Neuromodulating the manifold. (A) Using a neural mass model implemented in The Virtual Brain, the input-output curve defining
the activity of a slow variable was manipulated in two distinct ways: the sigmoid curve was steepened (left, neural gain) or amplified (right,
excitability). (B) Varying neural gain and excitability caused an abrupt switch in systems-level dynamics—by increasing neural gain, the system
shifted from a Segregated state (“S,” low phase synchrony) into an Integrated state (“I,” high phase synchrony). (C) Schematic diagram of
functional brain networks in the Segregated (i.e., “S”) and Integrated (i.e., “I”) phases—in the Integrated state, there are increased connections
present between otherwise isolated modules. (D) Upper panel: an energy landscape, which defines the energy required to move between
different brain states—by increasing response gain, noradrenaline is proposed to flatten the energy landscape (red); whereas by increasing
multiplicative gain, acetylcholine should deepen the energy wells (green). Lower panel: empirical BOLD trajectory energies as a function
of mean squared displacement (MSD) and sample time point (TR) of the baseline activity (black) and after phasic bursts in the locus coeruleus
(a key noradrenergic hub in the brainstem, red) and the basal nucleus of Meynert (the major source of cortical acetylcholine, green)—relative
to the baseline energy landscape, phasic bursts in the locus coeruleus (red) lead to a flattening or reduction of the energy landscape, whereas
peaks in the basal nucleus of Meynert (green) lead to a raising of the energy landscape. Panels A–C adapted from (Li et al., 2019) and Panels
D–E adapted from (Munn et al., 2021).
to advance our understanding of the rules that govern dynamical activity in the brain (Box 1),
we identify three key principles through which neuroimaging researchers can adopt a dynam-
ical systems perspective: zooming out from the local to the global level, trading off static for
more dynamic descriptions of the brain, and moving from description to simulation (Figure 3).
By designing neuroimaging approaches that embrace each of these aspects, we hope to entice
the field toward more “ideal” experiments that will both expose the inner workings of the
brain, but also identify more sensitive means for interacting with the complex, adaptive,
and dynamic nature of the brain.
Zooming Out to View the Whole Network
The popular “massively univariate” statistical parametric mapping (SPM; Figure 3) approach
employed in most fMRI research precludes a deep understanding of the dynamic brain, with its
interconnections influencing each other and changing over time. In this traditional approach,
following careful preprocessing steps (Esteban et al., 2019), independent statistical models are
fit, relating a behavioral task paradigm (convolved with a hemodynamic response function or finite
impulse response model to account for hemodynamic delay) to the time course of either a
single voxel or an averaged, summary time series calculated from a (hopefully predefined)
region of interest. Such approaches have been successful in identifying regions with particular
functions (such as the fusiform face area), via the clustering of voxels independently identified
with statistical models that typically involve task contrasts (such as activation during face vs.
scene viewing). The early success of these methods has entrenched a relatively static mindset
among academics that hinders more detailed explanations involving multiple regions interact-
ing over time. While there are numerous examples of pioneering work examining whole-brain
neuroimaging with circuits-level explanations, we maintain that purely stationary statistical
models are insufficient for a mechanistic understanding of cognitive phenomena in both
healthy and diseased states.
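For readers less familiar with the pipeline being described, a hedged, minimal sketch follows (our simplification, not the SPM implementation): a task boxcar is convolved with a canonical double-gamma HRF and then fit independently to each voxel’s time series with ordinary least squares. The HRF shape parameters and the simulated “voxels” are illustrative assumptions.

```python
# Minimal massively univariate GLM sketch on simulated data (illustrative only).
import numpy as np
from scipy.stats import gamma

tr, n_vols = 2.0, 150
times = np.arange(n_vols) * tr

# Canonical double-gamma HRF (approximate shape; peak ~5 s, undershoot ~15 s)
hrf = gamma.pdf(times, 6) - 0.35 * gamma.pdf(times, 16)
hrf /= hrf.max()

# Block design (20 s task / 20 s rest) convolved to account for hemodynamic delay
boxcar = ((times % 40) < 20).astype(float)
regressor = np.convolve(boxcar, hrf)[:n_vols]

# Simulated voxels: independent noise, with one voxel that truly follows the task
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, (n_vols, 5))
data[:, 2] += 2.0 * regressor

# Fit y = X @ beta independently for every voxel (one design matrix, many time series)
X = np.column_stack([np.ones(n_vols), regressor])
betas, *_ = np.linalg.lstsq(X, data, rcond=None)
print("estimated task effect per voxel:", np.round(betas[1], 2))
```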
Box 1. A spectrum of dynamical systems approaches in neuroimaging
Differential equations are becoming increasingly popular in DST modeling of neuroimaging
data (beim Graben et al., 2019; Kringelbach & Deco, 2020; Wang et al., 2019). However, as
in the case of data-oriented modeling techniques represented schematically in Figure 1,
differential equation-based methods occupy a continuous “feature space of models,” not all
of which use the full suite of DST concepts. Three key features have helped us make sense of
the ever-expanding literature on dynamical modeling and DST: (1) the extent of focus on qual-
itative or mechanistic explanations using qualitative patterns like attractors and bifurcations,
(2) the extent of focus on quantitative fitting of data, and (3) the degree to which characteri-
zation of data is employed to explain behavior (cognition, emotion, and other processes).
While it is tempting to view qualitative and quantitative modeling as mutually exclusive
extremes on a continuum, it is possible for a single model to excel at both. Recent work dem-
onstrates that close attention to data and precise mechanistic models can go hand in hand
(Breakspear, 2017; Deco & Jirsa, 2012; Kringelbach & Deco, 2020; Shine et al., 2021; Wang
et al., 2019). Nevertheless, the sheer complexity of data, as well as the plurality of research
goals, means that there cannot be a “one-size-fits-all” approach to dynamical modeling of the
brain. Ideally, models that perform quantitative fitting and those that focus more on qualitative
characterization can mutually constrain and inspire each other.
The third highlighted feature of DST models—the mapping between brain dynamics and
behavior—in our view has the most scope for growth. Given the complexity of the brain, it
is natural to treat it as a phenomenon on its own, rather than a central part of a wider set of
behavioral phenomena: cognition, emotion, and action. Given that these phenomena can
themselves be described in terms of dynamics, a key goal of DST in neuroimaging must be
to show, beyond mere correlation, how specific patterns of neural dynamics give rise to spe-
cific patterns of behavioral dynamics. In other words, the neuroimaging field will benefit from
DST models that not only generate accurate simulations and interface with lower level neural
mechanisms, but also provide a causal and functional account of the dynamics of emotions or
broad cognitive modes. Early steps in this direction include studies of meditation and sleep
that map DST concepts directly onto neuroimaging data (Deco et al., 2019; Galadí et al.,
2021; Melnychuk et al., 2018; Munn et al., 2021). Neuroimaging studies of clinical and psy-
chiatric conditions are beginning to be viewed through the DST lens, including epilepsy
(McIntosh & Jirsa, 2019), migraine (Dahlem & Isele, 2013), and schizophrenia (Loh et al.,
2007). There are many opportunities for close integration between DST as a way to study
neuroimaging data and DST as a perspective on how symptoms are generated, such as in
attention deficit hyperactivity disorder (Iravani et al., 2021), autism (Duch, 2019), and depres-
sion (Ramirez-Mahaluf et al., 2017).
In contrast, the DST approach has an inherent and direct link to underlying mechanisms.
For example, instead of performing a univariate analysis and reporting that a face viewing task
“activates” the fusiform face area, researchers could report how the entire brain activation pat-
terns shift from one state (while viewing scenes) to another (while viewing faces) and back
again over time. Even with a univariate analysis, this perspective could be supported by rou-
tinely including animation of fMRI activity, and by using unthresholded surface maps for
improved visualization. Multiecho sequences may even allow for sufficient denoising (Kundu
et al., 2017) to examine individual trials, precluding the need for the trial averaging that
occludes network states influencing activation movements. Unthresholded animation, espe-
cially denoised, could then hint at a trajectory between states. Crucially, this approach would
then offer additional steps, such as interrogating the likely neural processes that could have
caused the differences between cognitive capacities (assuming a good observational model),
or prediction of how the dynamics should change, given an intervention such as transcranial
magnetic stimulation or a suitably chosen pharmacological agent.
Figure 3. The space of analytic approaches in human neuroimaging. A nonexhaustive collection of different popular methods for analyzing
human neuroimaging data, embedded into a cube with axes that highlight three key dynamical systems characteristics: Static-to-Dynamic (x),
Reverse-to-Forwards (y), and Local-to-Global (z). We have argued that embracing the dynamical systems perspective requires moving to
the top right of the cube (i.e., the “Ideal Experiment”). While the theoretical goal of models should be dynamic, global, and built with forward
modeling in mind, multiple approaches are necessary for comprehensive understanding, especially the analysis of empirically obtained data
(the reverse approach). For further clarity, methods with high loading on the “Reverse” axis are colored red, and those high on the “Forwards”
axis are colored green. Note that some methods cover larger portions of this space than has been designated here (e.g., both PCA and ICA can
be used in either a dynamic or a static sense) and that the boxes should not be considered as strong limits for particular methods, but rather as
an approximate consensus for how particular methods are currently used by the majority of neuroimaging studies in the field. SPM = statistical
parametric mapping; FC = functional connectivity; MVPA = multivoxel pattern analysis; tvFC = time-varying functional connectivity; Dir. FC =
directed functional connectivity; PCA = principal components analysis; ICA = independent components analysis; ACF = autocorrelation func-
tion; DCM = dynamic causal modeling; SC = structural connectivity.
Multivariate analyses have been steadily growing in popularity over recent years. These
approaches begin with the assumption that neural representations are nonlocal: that is, that
the functional capacities of the brain rely on distributed patterns of activity that reflect the
influences that neural regions have over one another. The most widely adopted reverse
(i.e., data fitting) multivariate approaches for measuring these effects are functional connectiv-
ity fMRI (FC), seed-based and independent component analysis (ICA), multivoxel pattern anal-
ysis (MVPA), and the effective connectivity approaches of psychophysiological interactions
(PPI) and Granger causality (Figure 3). These methods provide insight into systems-level brain
organization: for example, the idea of a set of modular communities (derived using functional
connectivity) that loosely relate to distinct functional capacities (Smith et al., 2009). However,
despite this clarity, it is important to note that these methods are still primarily focused on fit-
ting data rather than creating a generative model. As such, a substantial theoretical gap still
remains between the appearance of these patterns and the mechanistic processes that could
give rise to them. As we mentioned above, this problem can be mitigated in large part by
grounding our investigations of neuroimaging data in a dynamical systems framework.
Other popular methods are based on the justified assumption that neural activity is low
dimensional: the inherent degrees of freedom of neuroimaging data are typically far fewer than
the number of different recordings that sample the brain (Churchland et al., 2012; Durstewitz,
2017; Gallego et al., 2020; Gotts et al., 2020; Shine et al., 2019a, 2019b). Embracing this
assumption—using popular approaches such as principal component analysis (PCA) and
ICA (Figure 3)—means that experimenters can reduce the number of independent variables
that they need to track, a process that makes both interpretation and modeling substantially
easier. In neuroimaging, the goal is typically to reduce the dimensionality of voxels or elec-
trodes such that what was once an unwieldy dataset can now be effectively tracked (and
visualized) in low-dimensional (“state”) space. In a recent fMRI study, Shine et al. (2019a) used
PCA to reduce regional BOLD activity across multiple tasks to a set of low-dimensional com-
ponents that were then shown to link clearly to analyses based on cognitive neuroscience,
network neuroscience, DST, and neuromodulatory receptor expression. Crucially, certain crit-
ical assumptions of the dimensionality reduction approach are incompatible with aggressive
preprocessing steps often used to “clean” data (Gotts et al., 2020)—careful modeling clearly
shows that these strategies often “throw out the baby with the bathwater,” and hence should
be applied with abundant caution. Regardless, this approach only scratches the surface of the
potential for dimensionality reduction in systems neuroscience, as evidenced by the many
examples from nonhuman studies (Chaudhuri et al., 2019; Mastrogiuseppe & Ostojic, 2018;
Stringer et al., 2016).
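A hedged sketch of this dimensionality reduction step (ours, not the pipeline of the cited studies): projecting a (time x regions) data matrix onto its leading principal components yields a low-dimensional state-space trajectory that is far easier to visualize and model. The surrogate data and component count below are illustrative assumptions.

```python
# Illustrative PCA of a surrogate (time x regions) BOLD matrix via SVD.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_regions, n_components = 300, 100, 3

# Surrogate data: two slow latent signals broadcast to all regions, plus noise
latents = np.column_stack([np.sin(np.linspace(0, 12, n_timepoints)),
                           np.cos(np.linspace(0, 7, n_timepoints))])
bold = latents @ rng.normal(size=(2, n_regions)) + 0.5 * rng.normal(size=(n_timepoints, n_regions))

# PCA: mean-centre, then take the leading right singular vectors as components
centred = bold - bold.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
trajectory = centred @ Vt[:n_components].T          # time course in the reduced state space
explained = (S**2 / np.sum(S**2))[:n_components]

print("variance explained by first 3 components:", np.round(explained, 2))
print("low-dimensional trajectory shape (time x components):", trajectory.shape)
```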
Graph theory provides another means for embracing the distributed nature of neural activity
patterns (Sporns, 2015), enabling a more harmonious integration with DST. One such
approach treats regions of the brain as nodes of a network (or graph), and then defines the
edges between these nodes according to the strength of temporal similarity (for example, using
a Pearson’s correlation or wavelet coherence). Following this step, mathematical tools (Fornito
et al., 2016) can be used to infer topological properties of the network, that is, those features
that are present in the data, irrespective of the specific implementation (Sporns, 2013), and
how these properties change as a function of factors such as the cognitive demands of the task
(Shine & Poldrack, 2018). The approach is not without pitfalls, as seemingly trivial choices
(such as the presence and extent of edge thresholding) can have substantial impacts on the
conclusions inferred about particular cognitive capacities (Hallquist & Hillary, 2019). In addi-
tion, there is also evidence that the ability to decipher stable nodes can vary substantially as a
function of different cognitive tasks (Salehi et al., 2020). Despite these concerns, these
approaches do reveal important aspects of the systems-level dynamics of the brain, and hence
are capable of generating predictions about how neural activity is grounded in the underlying
neurobiology. Two pertinent examples from recent work involve linking brain network inte-
gration to the diffuse projections of the ascending noradrenergic system (Munn et al., 2021;
Shine et al., 2016, 2018) and the matrix regions of the thalamus (Müller et al., 2020a, 2020b).
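To make the graph-theoretic pipeline concrete, a minimal sketch follows (our toy example, not any of the cited analyses): regional time series are correlated to form an edge matrix, a threshold is applied, and simple topological properties such as node degree are read off. The threshold value, like the surrogate data, is an arbitrary assumption, which is exactly why such choices deserve the caution noted above.

```python
# Toy functional-connectivity graph: correlation matrix -> threshold -> node degree.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints = 200

# Surrogate time series with two "modules" of four regions each
shared_a = rng.normal(size=(n_timepoints, 1))
shared_b = rng.normal(size=(n_timepoints, 1))
timeseries = np.hstack([shared_a + 0.6 * rng.normal(size=(n_timepoints, 4)),
                        shared_b + 0.6 * rng.normal(size=(n_timepoints, 4))])

fc = np.corrcoef(timeseries.T)          # nodes = regions, edges = Pearson correlation
np.fill_diagonal(fc, 0.0)

threshold = 0.3                         # a seemingly trivial choice with real consequences
adjacency = (fc > threshold).astype(int)
degree = adjacency.sum(axis=0)

print("node degree:", degree)
print("mean within-module FC:", round(fc[:4, :4].mean(), 2),
      " mean between-module FC:", round(fc[:4, 4:].mean(), 2))
```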
Shifting From Static to Dynamic
An organism is a constantly changing web of biophysical and electrochemical interactions. A
natural consequence of this organization is that the manner in which stimuli are processed
depends on the state of the organism at the precise moment that a stimulus arrives. In other
words, the brain is inherently dynamic, and cannot be understood with mere static descriptions.
For example, it is essential to examine not only how activity levels in voxels change over time,
but also to model how voxels influence each other. Unfortunately, the majority of approaches
used in modern neuroimaging contain a hidden assumption of stationarity—when viewed
through the lens of DST, this amounts to assuming that the brain is always in the same position
in state space when a stimulus arrives, which is difficult to justify.
One simple way to incorporate dynamics into modern neuroimaging approaches is to
extend analyses beyond the typical assumptions of zero-lag correlation that permeate the field.
These patterns are not uninterpretable in their own right—for example, the robustness and rel-
ative invariance of static network parcellations derived from long fc-fMRI scans suggests a
form of slow dynamic stability, rather than an artifact of averaging. However, there is also evi-
dence that, by calculating functional connectivity patterns across an entire scan, investigators
potentially average across reconfigurations that occur over shorter time scales (Faskowitz
et al., 2020; Honey et al., 2007; Karahanoğlu & Van De Ville, 2015). Fortunately, methods
exist to soften these constraints (Robinson et al., 2021). For example, tracking time-shifted cor-
relations in fMRI showed that the well-known zero-lag temporal correlation structure of intrin-
sic activity emerges as a consequence of neural trajectories, assessed by their lag structure
(Mitra et al., 2015) (Figure 3). At their extreme, these patterns can be interpreted as spatiotem-
poral traveling waves (Raut et al., 2021) or eigenmodes (Robinson et al., 2021), welche sind
amenable to dynamical systems modeling (Koch, 2021). Traveling wave models are an exam-
ple of a broad class of coarse-graining approaches in DST that include neural field, neural
mass and mean field models (Bojak et al., 2010; Byrne et al., 2020; Deco et al., 2013b; Müller
et al., 2020a; Shine et al., 2021; Wang et al., 2019). Another pertinent example comes from
the field of time-varying functional connectivity, which typically breaks a standard neuroim-
aging scan into smaller windows and then characterizes fluctuations in correlation patterns
over time (Lurie et al., 2020). In both cases, embracing the dynamics inherent in interregional
coordination can pave the way to more powerful generative models of the human brain and its
mediation of behavior.
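A minimal sketch of the time-varying functional connectivity idea (ours, not a specific toolbox’s implementation): correlations are recomputed in sliding windows, revealing coupling that changes over the course of a scan and would be averaged away by a single whole-scan estimate. The window length, step size, and simulated coupling change are illustrative assumptions.

```python
# Sliding-window correlation between two surrogate regional time series whose
# coupling switches off halfway through the "scan".
import numpy as np

rng = np.random.default_rng(3)
n_timepoints, window, step = 400, 60, 40

driver = rng.normal(size=n_timepoints)
follower = np.empty(n_timepoints)
follower[:200] = driver[:200] + 0.5 * rng.normal(size=200)   # coupled first half
follower[200:] = rng.normal(size=200)                        # uncoupled second half

for start in range(0, n_timepoints - window + 1, step):
    window_slice = slice(start, start + window)
    r = np.corrcoef(driver[window_slice], follower[window_slice])[0, 1]
    print(f"window starting at volume {start:3d}: r = {r:+.2f}")
```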
A common criticism of fMRI is that the typical temporal resolution is slower than the time
scales of most perceptual and behavioral changes. While this is true for fast behavioral
choices, homeostatic processes in humans and other organisms necessarily take place at a
variety of temporal scales. The fastest perceptions and reactions are embedded in slow dynam-
ical trajectories that may correspond to phenomena such as mood, affect, or cognitive mode,
which in turn are embedded in even slower trajectories such as hormonal/circadian rhythms
and so on. The temporally and spatially coarse-grained nature of whole-brain functional imag-
ing makes it well suited to characterizing “quasi-invariants”—neural contexts within which per-
ception, thinking, and action are framed. Neural dynamics is organized across an intertwined
temporal hierarchy, with causal relationships operating in both directions. For example, slower
oscillations modulate fast oscillations (Tort et al., 2010), and, psychologically, a sudden fright
may cause a lasting change of mood. As a first approximation, it is useful to think of slower
fMRI findings as a window into slow processes that set the context for faster processing.
Further, clever task designs can identify faster responses, on the order of hundreds of millisec-
onds (Lewis et al., 2018), so even faster dynamics can be studied.
Another potential barrier to application of dynamical analysis of fMRI is the fact that most
fMRI paradigms involve analysis of data from predetermined epochs, whether they are blocks
of stimuli or collections of rapidly presented events. While traditionally considered important
for ensuring effective signal-to-noise properties, the constraints imposed by these approaches
can limit the conclusions made about the dynamical processes at play. Moreover, a pure task-
based division of neural recordings will average out any functional variability that is indepen-
dent of the task structure. In other words, the underlying assumption is that all functionally
relevant neural dynamics are strongly correlated to the temporal division assumed by the
experimenter. Fortunately, newer task structures such as movie watching (Finn & Bandettini,
2020; Meer et al., 2020) and videogames (Richlan et al., 2018) do not impose the event struc-
tures that are typically used in signal-averaging approaches. Instead, dynamical models can be
constructed that predict how the trajectory of brain states will change in concert with the
videogame, and these simulations can then be compared with the fMRI data acquired.
The notion of attractor landscapes provides enticing links to whole-brain neuroimaging and
suggests a set of neural trajectories that can be applied to neuroimaging data. In this framing,
brain states evolve along the attractor landscape topography, much like a ball rolls under the
influence of gravity down a valley and requires energy to traverse up a hill, which correspond to
an evolution toward an attractive or repulsive brain state, respectively. This technique can
resolve what might otherwise be obscured states of attraction (and repulsion) in a multistable
system and has been successfully applied to the dynamics of spiking neurons (Tkačik et al.,
2015), BOLD fMRI (Munn et al., 2021; Watanabe et al., 2013, 2014), and MEG (Krzemiński
et al., 2020). The approach offers several conceptual advances, but perhaps most importantly,
it renders the otherwise daunting task of systems-level interpretation relatively intuitive.
Importantly, this framework extends beyond mere analogy, as the topography of the attractor
landscape shares a 1-to-1 correspondence with the generative equations required to synthe-
size realistic neural time series data (Breakspear, 2017). For example, Munn et al. (2021)
compared trajectories of BOLD activity following phasic bursts of subcortical regions of the
ascending arousal system, and by leveraging the attractor landscape approach it was appar-
ent that adrenergic and cholinergic neuromodulation actively modulated the strength of an
attractor state.
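As a deliberately simplified proxy for this family of analyses (ours; the cited studies use more sophisticated estimators, such as mean squared displacement or pairwise maximum entropy models), one common way to summarize an “energy” landscape from a recorded signal is to histogram the visited states and take the negative log of their occupancy: rarely visited states sit high on the landscape, while frequently visited ones sit in basins.

```python
# Pseudo-energy landscape from a surrogate bistable signal (illustrative proxy only).
import numpy as np

rng = np.random.default_rng(4)

# Surrogate 1-D signal hopping between two preferred levels (double-well dynamics)
n_samples = 5000
x = np.empty(n_samples)
x[0] = 1.0
for t in range(1, n_samples):
    drift = -4.0 * x[t - 1] * (x[t - 1]**2 - 1.0)     # force from a double-well potential
    x[t] = x[t - 1] + 0.01 * drift + 0.15 * rng.normal()

# Energy proxy: negative log of how often each binned state is occupied
occupancy, edges = np.histogram(x, bins=15, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
energy = -np.log(occupancy + 1e-12)                   # arbitrary units

for centre, e in zip(centres, energy):
    print(f"state {centre:+.2f}   pseudo-energy {e:.2f}")
```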
Moving From Description to Simulation
All computational models in biology can be situated on a continuum from “reverse” to “for-
ward,” based on their relationship with experimental data (Gunawardena, 2014). Statistical
models proceed in the “reverse” direction: the modeling begins with experimental data and
then “reverse engineers” the causal mechanisms that generated the data. In contrast, “forward”
modeling starts with known or hypothetical causal mechanisms, which are used to generate
patterns that mirror key aspects of experimental data (Breakspear, 2017). These two
approaches were combined in what is arguably the most successful model in neuroscience,
the Hodgkin–Huxley model of action potential generation (Hodgkin & Huxley, 1952): the data
fitting facilitated the discovery of a system of differential equations that pointed toward the
mechanisms underlying action potential generation.
At scales larger than the single neuron, forward modeling becomes increasingly undercon-
strained by experimental data. There is also no consensus on the neurobiological underpin-
nings of neuroimaging techniques (Breakspear, 2017). But the lack of constraint by data does
not mean that forward models cannot be built: careful analysis of anatomy, behavior, and evo-
lutionary history can provide modelers with well-justified mechanisms that can be captured by
differential equations. Further, given the variability of neural and behavioral data, it does not
make sense for generative models to cleave too closely to specific quantitative recordings.
Qualitative descriptions and predictions can be more robust than quantitative data fits, as they
generalize more easily, being less sensitive to idiosyncratic features of specific experiments.
For example, the notion that acetylcholine and noradrenaline can modulate attractor land-
scape topography (Munn et al., 2021) can be imported into the design of future experiments,
not only in the context of meditation, but also to attention more broadly construed. It also
creates bridges with nonhuman research techniques that can directly manipulate these
neuromodulators.
There are existing software programs for simulating dynamical systems, such as the Brain
Dynamics Toolbox (Breakspear & Heitmann, 2010; Heitmann & Breakspear, 2018) and the
Virtual Brain (Ritter et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2021; Spiegler
et al., 2016). Using these tools, DST concepts can be directly tested through comparison of
model outputs with fMRI data. Jedoch, because the field of DST in neuroimaging is rapidly
evolving, software packages may be less flexible than custom simulations written in program-
ming languages like MATLAB, Python, or Julia. For example, custom code can be used to con-
struct layer-specific models that incorporate the precise, compartment-specific connectivity
principles that are present in the cerebral cortex (Braitenberg & Lauria, 1960; Du et al.,
2012; Havlicek & Uludağ, 2020; Stephan et al., 2019). Regardless of the computational
approach taken, the activity dynamics for each of the regions or neurons can be simulated,
and the activity can then be convolved with a canonical hemodynamic response function, or
better yet, with more advanced models of hemodynamics (Aquino et al., 2012; Pang et al.,
2016). The output of this simulation can then be compared qualitatively with fMRI data col-
lected during an experiment, with further iterations of the model bringing theory into closer
contact with empirical data. This approach will be particularly powerful when combined with
advances in fast sampling-rate (Polimeni & Lewis, 2021) and layer-resolved fMRI recordings
(Huber et al., 2021; Polimeni et al., 2010), both of which will increase the precision with
which models can be integrated with neuroimaging data.
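A minimal sketch of that forward-modeling loop (our toy example with assumed coupling weights and HRF shape, not a published model or The Virtual Brain’s implementation): a few coupled firing-rate “regions” are simulated, the resulting activity is convolved with a canonical double-gamma HRF, and the output is downsampled to a typical fMRI sampling rate before qualitative comparison with data.

```python
# Toy forward model: coupled rate units -> HRF convolution -> BOLD-like signals at TR = 2 s.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(5)
n_regions, dt, duration = 3, 0.1, 300.0
n_steps = int(duration / dt)

# Assumed (random) coupling between regions, standing in for a connectome
coupling = 0.2 * rng.random((n_regions, n_regions))
np.fill_diagonal(coupling, 0.0)

rate = np.zeros((n_steps, n_regions))
for t in range(1, n_steps):
    drive = coupling @ np.tanh(rate[t - 1]) + 0.5 * rng.normal(size=n_regions)
    rate[t] = rate[t - 1] + dt * (-rate[t - 1] + drive)    # leaky firing-rate dynamics

# Canonical double-gamma HRF sampled at the simulation resolution
t_hrf = np.arange(0.0, 30.0, dt)
hrf = gamma.pdf(t_hrf, 6) - 0.35 * gamma.pdf(t_hrf, 16)

bold = np.column_stack([np.convolve(rate[:, i], hrf)[:n_steps] for i in range(n_regions)])
bold_tr = bold[::int(2.0 / dt)]                            # downsample to TR = 2 s
print("simulated BOLD shape (volumes x regions):", bold_tr.shape)
```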
It is important to note that a key constraint imposed by computational models is the degree
of their abstraction from the “veridical”—the vast dimensionality of the adult human brain is
undoubtedly more complex than a typical neural model can realistically simulate, such that
even the most detailed computational model will likely lack the degrees of freedom to effec-
tively characterize the true nature of the dynamical system with sufficient clarity and robust-
ness. One way to mitigate this issue is to design modeling architectures to express a particular
feature of neuroanatomy, and then, after investigating any interesting implications of the fea-
ture, compare the outputs of the model with empirical neuronal recordings. The Virtual Brain
(Ritter et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2021) is an excellent example of a
toolbox that affords access to this approach, and has been used to demonstrate important links
between structure and function across many spatiotemporal scales. In these approaches, users
define the network structure and computational model of interest, and then manipulate which-
ever parameters are of experimental interest. A complementary approach is to design more
bespoke neural architectures, such as those that embrace interactions between the cerebral
cortex and thalamus, and then work to determine what the benefits and costs of such an archi-
tecture might be. Zum Beispiel, the presence of a population of relatively diffuse thalamocor-
tical projections (as is the case for matrix thalamic nuclei; Jones, 2001; Müller et al., 2020A);
can shift a network of corticothalamic neural masses into a quasi-critical regime characterized
by the continual formation and dissolution of neuronal ensembles in such a way that maxi-
mizes a trade-off between network integration and segregation (Müller et al., 2020b). Although
these approaches can be quite insightful, it is important to remember to pick a scale of model-
ing that matches both the mechanism of interest, and the particular imaging technique that the
researcher is interested in interrogating.
A point worth stressing is that DST goes beyond the use of differential equations to fit data.
For example, some variations of DCM (Cao et al., 2019; Friston et al., 2019) focus on data
fitting but do not employ qualitative concepts such as attractor landscapes, limit cycles, or
bifurcations, partly because they restrict themselves to the linear domain (Sadeghi et al.,
2020), whereas more sophisticated nonlinear variations do (Daunizeau et al., 2012; Roberts
et al., 2017a, 2017b). Models based on differential equations, whether linear or nonlinear, are
also generative, and can simulate hypothetical BOLD data. In addition to the capacity for
quantitative fits and simulations, DST offers conceptual tools that create bridges between data
and neural mechanisms. In principle, any neuroimaging outcome measure can be generated
by a well-designed forward model, but measures that embrace the complex, dynamical fea-
tures of biological data (Bizzarri et al., 2019; Juarrero, 2002) will likely lead to a richer
causal understanding. Further, as we have mentioned at various points in this manuscript,
the qualitative tools of DST—attractors, bifurcations, metastability, etc.—not only help
account for data and neural processes, but also create natural links with the dynamics of
behavior and cognition (also see Box 1).
CONCLUSIONS
In this Review, we have argued that the DST framework has the potential to revolutionize the analysis of neuroimaging data and how these data account for behavior, both in artificial task-based protocols and in more naturalistic situations such as movie watching. We have argued that embracing this perspective will enable the discovery of otherwise latent links between neural mechanisms and the patterns that we measure with standard imaging approaches, which in turn can be used to rapidly augment our understanding of the brain, both in health and disease. For example, we argue that a renewed focus on time-varying dynamics, via the identification of qualitative but well-characterized dynamical phenomena (such as stability and limit cycles) or, ideally, the geometric or visual interpretation of results (e.g., in terms of attractor basins or saddles) emergent in whole-brain neuroimaging data, will lead to rapid progress in systems neuroscience. This paradigm shift is already well underway, as evidenced by numerous papers that have used neuroimaging to derive measures of stability, entropy, and low-dimensional attractor manifolds as a function of different task contexts (Chaudhuri et al., 2019; Koppe et al., 2019; Müller et al., 2020b; Munn et al., 2021).
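By way of illustration, one common route to such low-dimensional descriptions is to project a timepoints-by-regions BOLD matrix onto its leading principal components and inspect the resulting trajectory; the sketch below is a generic, assumed analysis using surrogate data and arbitrary dimensions, not the specific pipeline of any of the studies cited above.

# Generic, assumed analysis (surrogate data, arbitrary dimensions): project a
# timepoints-by-regions BOLD matrix onto its leading principal components to
# obtain a low-dimensional trajectory of the kind discussed above.
import numpy as np

def low_dim_trajectory(bold, n_components=3):
    """bold: array of shape (timepoints, regions); returns trajectory and variance explained."""
    X = bold - bold.mean(axis=0)             # de-mean each region
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return X @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(0)
fake_bold = rng.standard_normal((200, 90))   # 200 TRs x 90 regions (stand-in data)
traj, var = low_dim_trajectory(fake_bold)
print(traj.shape, var.round(3))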
There is much work to be done. Fortunately, a major benefit of the DST approach is that there exists a large corpus of fMRI data that can be reanalyzed within the frame imposed by dynamical systems, potentially leading to major new insights into the brain bases of higher order mental phenomena. To this end, we strongly recommend that interested neuroscientists reach out to and actively collaborate with computational modelers in order to build models that make predictions and deepen intuition and explanation for the data already acquired. Of course, the advent of higher spatial and temporal resolution data, and of interventional datasets such as those that combine optogenetics with fMRI (Ryali et al., 2016), will undoubtedly further accelerate progress. Nonlinear dynamical systems must typically be studied by simulation, so advances in computational power fuel advances in what can be understood with DST. The synergistic interactions that will emerge between DST and imaging are a crucial step toward the maturation of the field of systems neuroscience.
AUTHOR CONTRIBUTIONS
Yohan J. John: Conceptualization; Writing – original draft; Writing – review & editing. Kayle S. Sawyer: Conceptualization; Writing – original draft; Writing – review & editing. Karthik Srinivasan: Conceptualization; Writing – original draft; Writing – review & editing. Eli J. Müller: Conceptualization; Visualization; Writing – original draft; Writing – review & editing. Brandon R. Munn: Conceptualization; Writing – original draft; Writing – review & editing. James Shine: Conceptualization; Visualization; Writing – original draft; Writing – review & editing.
FUNDING INFORMATION
James Shine, National Health and Medical Research Council (https://dx.doi.org/10.13039
/501100000925), Award ID: 1193857.
REFERENCES
Aquino, K. M., Schira, M. M., Robinson, P. A., Drysdale, P. M., &
Breakspear, M. (2012). Hemodynamic traveling waves in human
visual cortex. PLoS Computational Biology, 8(3), e1002435.
https://doi.org/10.1371/journal.pcbi.1002435, PubMed:
22457612
Arnsten, A. F. T. (1998). The biology of being frazzled. Science,
280(5370), 1711–1712. https://doi.org/10.1126/science.280
.5370.1711, PubMed: 9660710
beim Graben, P., Jimenez-Marin, A., Diez, I., Cortes, J. M.,
Desroches, M., & Rodrigues, S. (2019). Metastable resting state
brain dynamics. Frontiers in Computational Neuroscience, 13, 62.
https://doi.org/10.3389/fncom.2019.00062, PubMed: 31551744
Beurle, R. (1956). Properties of a mass of cells capable of regener-
ating pulses. Philosophical Transactions of the Royal Society of
London. Series B, Biological Sciences, 240(669), 55–94. https://
doi.org/10.1098/rstb.1956.0012
Bizzarri, M., Brash, D. E., Briscoe, J., Grieneisen, V. A., Stern, C. D.,
& Levin, M. (2019). A call for a better understanding of causation
in cell biology. Nature Reviews Molecular Cell Biology, 20(5),
261–262. https://doi.org/10.1038/s41580-019-0127-1, PubMed:
30962573
Bojak, I., Oostendorp, T. F., Reid, A. T., & Kötter, R. (2010). Con-
necting mean field models of neural activity to EEG and fMRI
Daten. Brain Topography, 23(2), 139–149. https://doi.org/10.1007
/s10548-010-0140-3, PubMed: 20364434
Braitenberg, V., & Lauria, F. (1960). Toward a mathematical
description of the grey substance of nervous systems. Il Nuovo
Cimento (1955–1965), 18(2), 149–165. https://doi.org/10.1007
/bf02783537
Breakspear, M. (2017). Dynamic models of large-scale brain activity.
Nature Neuroscience, 20(3), 340–352. https://doi.org/10.1038/nn
.4497, PubMed: 28230845
Breakspear, M., & Heitmann, S. (2010). Generative models of cor-
tical oscillations: Neurobiological implications of the Kuramoto
model. Frontiers in Human Neuroscience, 4, 190. https://doi.org
/10.3389/fnhum.2010.00190, PubMed: 21151358
Brette, R. (2019). Is coding a relevant metaphor for the brain?
Behavioral and Brain Sciences, 42, e215. https://doi.org/10.1017
/s0140525x19000049, PubMed: 30714889
Byrne, Á., O’Dea, R. D., Forrester, M., Ross, J., & Coombes, S.
(2020). Next-generation neural mass and field modeling. Journal
of Neurophysiology, 123(2), 726–742. https://doi.org/10.1152/jn
.00406.2019, PubMed: 31774370
Cabral, J., Kringelbach, M. L., & Deco, G. (2014). Exploring the net-
work dynamics underlying brain activity during rest. Progress in
Neurobiology, 114, 102–131. https://doi.org/10.1016/j.pneurobio
.2013.12.005, PubMed: 24389385
Caianiello, E. R. (1961). Outline of a theory of thought-processes and
thinking machines. Journal of Theoretical Biology, 1, 204–235.
https://doi.org/10.1016/0022-5193(61)90046-7, PubMed:
13689819
Cao, X., Sandstede, B., & Luo, X. (2019). A functional data method
for causal dynamic network modeling of task-related fMRI. Fron-
tiers in Neuroscience, 13, 127. https://doi.org/10.3389/fnins
.2019.00127, PubMed: 30872989
Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A., & Fiete, I.
(2019). The intrinsic attractor manifold and population dynamics
of a canonical cognitive circuit across waking and sleep. Nature
Neuroscience, 22(9), 1512–1520. https://doi.org/10.1038
/s41593-019-0460-x, PubMed: 31406365
Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster,
J. D., Nuyujukian, P., Ryu, S. I., & Shenoy, K. V. (2012). Neuronal
population dynamics during reaching. Nature, 487(7405),
51–56. https://doi.org/10.1038/nature11129, PubMed: 22722855
Corchs, S., & Deco, G. (2004). Feature-based attention in human
visual cortex: Simulation of fMRI data. NeuroImage, 21(1), 36–45.
https://doi.org/10.1016/j.neuroimage.2003.08.045, PubMed:
14741640
Csete, M. E., & Doyle, J. C. (2002). Reverse engineering of biolog-
ical complexity. Science, 295(5560), 1664–1669. https://doi.org
/10.1126/science.1069981, PubMed: 11872830
Dahlem, M. A., & Isele, T. M. (2013). Transient localized wave
patterns and their application to migraine. The Journal of
Mathematical Neuroscience, 3(1), 7. https://doi.org/10.1186
/2190-8567-3-7, PubMed: 23718283
Daunizeau, J., Stephan, K. E., & Friston, K. J. (2012). Stochastic
dynamic causal modelling of fMRI data: Should we care about
neural noise? NeuroImage, 62(1), 464–481. https://doi.org/10
.1016/j.neuroimage.2012.04.061, PubMed: 22579726
Deco, G., Cruzat, J., Cabral, J., Tagliazucchi, E., Laufs, H., Logothetis,
N. K., & Kringelbach, M. L. (2019). Awakening: Predicting external
stimulation to force transitions between different brain states.
Proceedings of the National Academy of Sciences, 116(36),
18088–18097. https://doi.org/10.1073/pnas.1905534116,
PubMed: 31427539
Deco, G., & Jirsa, V. K. (2012). Ongoing cortical activity at rest:
Criticality, multistability, and ghost attractors. Journal of Neuro-
science, 32(10), 3366–3375. https://doi.org/10.1523/jneurosci
.2523-11.2012, PubMed: 22399758
Deco, G., Jirsa, V. K., & McIntosh, A. R. (2011). Emerging concepts
for the dynamical organization of resting-state activity in the
brain. Nature Reviews Neuroscience, 12(1), 43–56. https://doi
.org/10.1038/nrn2961, PubMed: 21170073
Deco, G., Jirsa, V. K., & McIntosh, A. R. (2013A). Resting brains
never rest: Computational insights into potential cognitive archi-
tectures. Trends in Neurosciences, 36(5), 268–274. https://doi.org
/10.1016/j.tins.2013.03.001, PubMed: 23561718
Deco, G., Kringelbach, M. L., Arnatkeviciute, A., Oldham, S.,
Sabaroedin, K., Rogasch, N. C., Aquino, K. M., & Fornito, A.
(2021). Dynamical consequences of regional heterogeneity in the
brain’s transcriptional landscape. Science Advances, 7(29), eabf4752.
https://doi.org/10.1126/sciadv.abf4752, PubMed: 34261652
Deco, G., Ponce-Alvarez, A., Mantini, D., Romani, G. L., Hagmann, P.,
& Corbetta, M. (2013B). Resting-state functional connectivity
emerges from structurally and dynamically shaped slow linear
fluctuations. Journal of Neuroscience, 33(27), 11239–11252. https://
doi.org/10.1523/jneurosci.1091-13.2013, PubMed: 23825427
Deco, G., Rolls, E. T., & Romo, R. (2009). Stochastic dynamics as a
principle of brain function. Progress in Neurobiology, 88(1), 1–16.
https://doi.org/10.1016/j.pneurobio.2009.01.006, PubMed:
19428958
Deco, G., Tononi, G., Boly, M., & Kringelbach, M. L. (2015).
Rethinking segregation and integration: Contributions of whole-
brain modelling. Nature Reviews Neuroscience, 16(7), 430–439.
https://doi.org/10.1038/nrn3963, PubMed: 26081790
Du, J., Vegh, V., & Reutens, D. C. (2012). The laminar cortex model:
A new continuum cortex model incorporating laminar architec-
ture. PLoS Computational Biology, 8(10), e1002733. https://doi
.org/10.1371/journal.pcbi.1002733, PubMed: 23093925
Duch, W. (2019). Autism spectrum disorder and deep attractors in
neurodynamics. In V. Cutsuridis (Ed.), Multiscale Models of Brain
Disorders (pp. 135–146). Springer International Publishing.
https://doi.org/10.1007/978-3-030-18830-6_13
Durstewitz, D. (2017). Advanced data analysis in neuroscience:
Integrating statistical and computational models. New York, NY:
Springer. https://doi.org/10.1007/978-3-319-59976-2
Durstewitz, D., Huys, Q. J. M., & Koppe, G. (2021). Psychiatric ill-
nesses as disorders of network dynamics. Biological Psychiatry:
Cognitive Neuroscience and NeuroImaging, 6(9), 865–876.
https://doi.org/10.1016/j.bpsc.2020.01.001, PubMed: 32249208
Einhäuser, W., Stout, J., Koch, C., & Carter, O. (2008). Pupil dilation
reflects perceptual selection and predicts subsequent stability in
perceptual rivalry. Proceedings of the National Academy of Sci-
ences of the United States of America, 105(5), 1704–1709.
https://doi.org/10.1073/pnas.0707727105, PubMed: 18250340
Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik,
A. ICH., Erramuzpe, A., … Gorgolewski, K. J. (2019). fMRIPrep: A
robust preprocessing pipeline for functional MRI. Nature
Methods, 16(1), 111–116. https://doi.org/10.1038/s41592-018
-0235-4, PubMed: 30532080
Faskowitz, J., Esfahlani, F. Z., Jo, Y., Sporns, O., & Betzel, R. F.
(2020). Edge-centric functional network representations of
human cerebral cortex reveal overlapping system-level architec-
ture. Nature Neuroscience, 23(12), 1644–1654. https://doi.org/10
.1038/s41593-020-00719-y, PubMed: 33077948
Favela, L. H. (2020). Dynamical systems theory in cognitive
science and neuroscience. Philosophy Compass, 15(8), e12695.
https://doi.org/10.1111/phc3.12695
Favela, L. H. (2021). The dynamical renaissance in neuroscience.
Synthese, 199(1), 2103–2127. https://doi.org/10.1007/s11229
-020-02874-j
Finn, E. S., & Bandettini, P. A. (2020). Movie-watching outperforms
rest for functional connectivity-based prediction of behavior.
bioRxiv. https://doi.org/10.1101/2020.08.23.263723
FitzHugh, R. (1955). Mathematical models of threshold phenomena
in the nerve membrane. The Bulletin of Mathematical Biophys-
ics, 17(4), 257–278. https://doi.org/10.1007/bf02477753
Fornito, A., Zalesky, A., & Bullmore, E. T. (2016). Fundamentals of
brain network analysis. Amsterdam, the Netherlands: Elsevier/
Academic Press.
Freeman, W. J. (1975). Mass action in the nervous system: Exami-
nation of the neurophysiological basis of adaptive behavior
through the EEG. New York, NY: Academic Press. https://doi
.org/10.1016/C2009-0-03145-6
Friston, K. J., Preller, K. H., Mathys, C., Cagnan, H., Heinzle, J.,
Razi, A., & Zeidman, P. (2019). Dynamic causal modelling revis-
ited. NeuroImage, 199, 730–744. https://doi.org/10.1016/j
.neuroimage.2017.02.045, PubMed: 28219774
Galadí, J. A., Silva Pereira, S., Sanz Perl, Y., Kringelbach, M. L.,
Gayte, I., Laufs, H., Tagliazucchi, E., Langa, J. A., & Deco, G.
(2021). Capturing the non-stationarity of whole-brain dynamics
underlying human brain states. NeuroImage, 244, 118551.
https://doi.org/10.1016/j.neuroimage.2021.118551, PubMed:
34506913
Gallego, J. A., Perich, M. G., Chowdhury, R. H., Solla, S. A., &
Miller, L. E. (2020). Long-term stability of cortical population
dynamics underlying consistent behavior. Nature Neuroscience,
23(2), 260–270. https://doi.org/10.1038/s41593-019-0555-4,
PubMed: 31907438
Ghosh, A., Rho, Y., McIntosh, A. R., Kotter, R., & Jirsa, V. K. (2008).
Noise during rest enables the exploration of the brain’s dynamic
repertoire. PLoS Computational Biology, 4(10), e1000196.
https://doi.org/10.1371/journal.pcbi.1000196, PubMed:
18846206
Gollo, L. L., Zalesky, A., Hutchison, R. M., van den Heuvel, M., &
Breakspear, M. (2015). Dwelling quietly in the rich club: Brain
network determinants of slow cortical fluctuations. Philosophical
Transactions of the Royal Society B: Biological Sciences, 370(1668),
20140165. https://doi.org/10.1098/rstb.2014.0165, PubMed:
25823864
Golos, M., Jirsa, V., & Daucé, E. (2015). Multistability in large scale
models of brain activity. PLoS Computational Biology, 11(12),
e1004644. https://doi.org/10.1371/journal.pcbi.1004644,
PubMed: 26709852
Gotts, S. J., Gilmore, A. W., & Martin, A. (2020). Brain networks,
dimensionality, and global signal averaging in resting-state fMRI:
Hierarchical network structure results in low-dimensional spatio-
temporal dynamics. NeuroImage, 205, 116289. https://doi.org/10
.1016/j.neuroimage.2019.116289, PubMed: 31629827
Griffith, J. S. (1963). A field theory of neural nets: I. Derivation of field
equations. The Bulletin of Mathematical Biophysics, 25, 111–120.
https://doi.org/10.1007/BF02477774, PubMed: 13950415
Grossberg, S. (1967). Nonlinear difference-differential equations in
prediction and learning theory. Proceedings of the National
Academy of Sciences of the United States of America, 58(4),
1329–1334. https://doi.org/10.1073/pnas.58.4.1329, PubMed:
5237867
Gunawardena, J. (2014). Models in biology: “Accurate descriptions
of our pathetic thinking.” BMC Biology, 12, 29. https://doi.org/10
.1186/1741-7007-12-29, PubMed: 24886484
Hallquist, M. N., & Hillary, F. G. (2019). Graph theory approaches
to functional network organization in brain disorders: A critique
for a brave new small-world. Network Neuroscience, 3(1), 1–26.
https://doi.org/10.1162/netn_a_00054, PubMed: 30793071
Hansen, E. C. A., Battaglia, D., Spiegler, A., Deco, G., & Jirsa, V. K.
(2015). Functional connectivity dynamics: Modeling the
switching behavior of the resting state. NeuroImage, 105,
525–535. https://doi.org/10.1016/j.neuroimage.2014.11.001,
PubMed: 25462790
Havlicek, M., & Uludağ, K. (2020). A dynamical model of the lam-
inar BOLD response. NeuroImage, 204, 116209. https://doi.org
/10.1016/j.neuroimage.2019.116209, PubMed: 31546051
Heitmann, S., & Breakspear, M. (2018). Putting the “dynamic” back
into dynamic functional connectivity. Network Neuroscience, 2(2),
150–174. https://doi.org/10.1162/netn_a_00041, PubMed:
30215031
Hlinka, J., & Coombes, S. (2012). Using computational models to
relate structural and functional brain connectivity. European Jour-
nal of Neuroscience, 36(2), 2137–2145. https://doi.org/10.1111/j
.1460-9568.2012.08081.X, PubMed: 22805059
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of
membrane current and its application to conduction and excita-
tion in nerve. The Journal of Physiology, 117(4), 500–544. https://
doi.org/10.1113/jphysiol.1952.sp004764, PubMed: 12991237
Hoeksma, J. B., Oosterlaan, J., Schipper, E., & Koot, H. (2007).
Finding the attractor of anger: Bridging the gap between dynamic
concepts and empirical data. Emotion, 7(3), 638–648. https://doi
.org/10.1037/1528-3542.7.3.638, PubMed: 17683219
Hommel, B., Chapman, C. S., Cisek, P., Neyedli, H. F., Song, J.-H., &
Welsh, T. N. (2019). No one knows what attention is. Attention,
Perception, & Psychophysics, 81(7), 2288–2303. https://doi.org
/10.3758/s13414-019-01846-w, PubMed: 31489566
Honey, C. J., Kotter, R., Breakspear, M., & Sporns, O. (2007). Net-
work structure of cerebral cortex shapes functional connectivity
on multiple time scales. Proceedings of the National Academy of
Wissenschaften, 104(24), 10240–10245. https://doi.org/10.1073/pnas
.0701519104, PubMed: 17548818
Huber, L., Finn, E. S., Chai, Y., Goebel, R., Stirnberg, R., Stöcker, T.,
Marrett, S., Uludag, K., Kim, S.-G., Han, S., Bandettini, P. A., &
Poser, B. A. (2021). Layer-dependent functional connectivity
methods. Progress in Neurobiology, 207, 101835. https://doi
.org/10.1016/j.pneurobio.2020.101835, PubMed: 32512115
Iravani, B., Arshamian, A., Fransson, P., & Kaboodvand, N. (2021).
Whole-brain modelling of resting state fMRI differentiates ADHD
subtypes and facilitates stratified neuro-stimulation therapy. Neu-
roImage, 231, 117844. https://doi.org/10.1016/j.neuroimage
.2021.117844, PubMed: 33577937
Izhikevich, E. M. (2006). Dynamical systems in neuroscience: The
geometry of excitability and bursting. Cambridge, MA: MIT Press.
https://doi.org/10.7551/mitpress/2526.001.0001
Jirsa, V. K., Friedrich, R., Haken, H., & Kelso, J. A. S. (1994). A the-
oretical model of phase transitions in the human brain. Biological
Cybernetics, 71(1), 27–35. https://doi.org/10.1007/BF00198909,
PubMed: 8054384
Jirsa, V. K., & Kelso, S. (Eds.). (2004). Coordination dynamics: Issues
and trends. New York, NY: Springer. https://doi.org/10.1007/978
-3-540-39676-5
John, Y. J., Zikopoulos, B., Bullock, D., & Barbas, H. (2018). Visual
attention deficits in schizophrenia can arise from inhibitory dys-
function in thalamus or cortex. Computational Psychiatry, 2,
223–257. https://doi.org/10.1162/cpsy_a_00023, PubMed:
30627672
Jones, E. G. (2001). The thalamic matrix and thalamocortical syn-
chrony. Trends in Neurosciences, 24(10), 595–601. https://doi
.org/10.1016/S0166-2236(00)01922-6, PubMed: 11576674
Juarrero, A. (2002). Dynamics in action: Intentional behavior as a
complex system. Cambridge, MA: MIT Press.
Karahanoğlu, F. I., & Van De Ville, D. (2015). Transient brain activity
disentangles fMRI resting-state dynamics in terms of spatially and
temporally overlapping networks. Nature Communications, 6,
7751. https://doi.org/10.1038/ncomms8751, PubMed: 26178017
Koch, J. (2021). Data-driven modeling of nonlinear traveling
waves. Chaos: An Interdisciplinary Journal of Nonlinear Science,
31(4), 043128. https://doi.org/10.1063/5.0043255, PubMed:
34251251
Koppe, G., Toutounji, H., Kirsch, P., Lis, S., & Durstewitz, D.
(2019). Identifying nonlinear dynamical systems via generative
recurrent neural networks with applications to fMRI. PLoS Com-
putational Biology, 15(8), e1007263. https://doi.org/10.1371
/journal.pcbi.1007263, PubMed: 31433810
Kringelbach, M. L., Cruzat, J., Cabral, J., Knudsen, G. M., Carhart-
Harris, R., Whybrow, P. C., Logothetis, N. K., & Deco, G. (2020).
Dynamic coupling of whole-brain neuronal and neurotransmitter
systems. Proceedings of the National Academy of Sciences, 117(17),
9566–9576. https://doi.org/10.1073/pnas.1921475117, PubMed:
32284420
Kringelbach, M. L., & Deco, G. (2020). Brain states and transitions:
Insights from computational neuroscience. Cell Reports, 32(10),
108128. https://doi.org/10.1016/j.celrep.2020.108128, PubMed:
32905760
Krzemiński, D., Masuda, N., Hamandi, K., Singh, K. D., Routley, B.,
& Zhang, J. (2020). Energy landscape of resting magnetoenceph-
alography reveals fronto-parietal network impairments in epi-
lepsy. Network Neuroscience, 4(2), 374–396. https://doi.org/10
.1162/netn_a_00125, PubMed: 32537532
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago,
IL: University of Chicago Press.
Kundu, P., Voon, V., Balchandani, P., Lombardo, M. V., Poser, B. A.,
& Bandettini, P. A. (2017). Multi-echo fMRI: A review of applica-
tions in fMRI denoising and analysis of BOLD signals. Neuro-
Image, 154, 59–80. https://doi.org/10.1016/j.neuroimage.2017
.03.033, PubMed: 28363836
Lewis, L. D., Setsompop, K., Rosen, B. R., & Polimeni, J. R. (2018).
Stimulus-dependent hemodynamic response timing across the
human subcortical-cortical visual pathway identified through
high spatiotemporal resolution 7T fMRI. NeuroImage, 181,
279–291. https://doi.org/10.1016/j.neuroimage.2018.06.056,
PubMed: 29935223
Li, M., Han, Y., Aburn, M. J., Breakspear, M., Poldrack, R. A., Shine,
J. M., & Lizier, J. T. (2019). Transitions in information processing
dynamics at the whole-brain network level are driven by alter-
ations in neural gain. PLoS Computational Biology, 15(10),
e1006957. https://doi.org/10.1371/journal.pcbi.1006957,
PubMed: 31613882
Loh, M., Rolls, E. T., & Deco, G. (2007). A dynamical systems
hypothesis of schizophrenia. PLoS Computational Biology, 3(11),
e228. https://doi.org/10.1371/journal.pcbi.0030228, PubMed:
17997599
Lurie, D. J., Kessler, D., Bassett, D. S., Betzel, R. F., Breakspear, M.,
Kheilholz, S., … Calhoun, V. D. (2020). Questions and controver-
sies in the study of time-varying functional connectivity in resting
fMRI. Network Neuroscience, 4(1), 30–69. https://doi.org/10
.1162/netn_a_00116, PubMed: 32043043
Mastrogiuseppe, F., & Ostojic, S. (2018). Linking connectivity,
dynamics, and computations in low-rank recurrent neural net-
works. Neuron, 99(3), 609–623. https://doi.org/10.1016/j
.neuron.2018.07.003, PubMed: 30057201
McIntosh, A. R., & Jirsa, V. K. (2019). The hidden repertoire of
brain dynamics and dysfunction. Network Neuroscience, 3(4),
994–1008. https://doi.org/10.1162/netn_a_00107, PubMed:
31637335
Meer, J. N. van der, Breakspear, M., Chang, L. J., Sonkusare, S.,
& Cocchi, L. (2020). Movie viewing elicits rich and reliable
brain state dynamics. Nature Communications, 11(1), 5004.
https://doi.org/10.1038/s41467-020-18717-w, PubMed:
33020473
Melnychuk, M. C., Dockree, P. M., O’Connell, R. G., Murphy, P. R.,
Balsters, J. H., & Robertson, ICH. H. (2018). Coupling of respiration
and attention via the locus coeruleus: Effects of meditation and
pranayama. Psychophysiology, 55(9), e13091. https://doi.org/10
.1111/psyp.13091, PubMed: 29682753
Miller, P. (2016). Dynamical systems, attractors, and neural circuits.
F1000Research, 5, F1000 Faculty Rev-992. https://doi.org/10
.12688/f1000research.7698.1, PubMed: 27408709
Mitra, A., Snyder, A. Z., Blazey, T., & Raichle, M. E. (2015). Lag
threads organize the brain’s intrinsic activity. Proceedings of
the National Academy of Sciences, 112(17), E2235–E2244.
https://doi.org/10.1073/pnas.1503960112, PubMed: 25825720
Müller, E. J., Munn, B., Hearne, L. J., Smith, J. B., Fulcher, B., Cocchi,
L., & Shine, J. M. (2020a). Core and matrix thalamic sub-
populations relate to spatio-temporal cortical connectivity
gradients. bioRxiv. https://doi.org/10.1101/2020.02.28.970350
Müller, E. J., Munn, B. R., & Shine, J. M. (2020b). Diffuse neural
coupling mediates complex network dynamics through the for-
mation of quasi-critical brain states. Nature Communications,
11(1), 6337. https://doi.org/10.1038/s41467-020-19716-7,
PubMed: 33303766
Munn, B., Müller, E. J., Wainstein, G., & Shine, J. M. (2021). The
ascending arousal system shapes low-dimensional neural dynamics
to mediate awareness of intrinsic cognitive states. Nature Commu-
nications, 12, 6016. https://doi.org/10.1038/s41467-021-26268-x,
PubMed: 34650039
Pang, J. C., Robinson, P. A., & Aquino, K. M. (2016). Response-
mode decomposition of spatio-temporal haemodynamics. Jour-
nal of the Royal Society Interface, 13(118), 20160253. https://
doi.org/10.1098/rsif.2016.0253, PubMed: 27170653
Pessoa, L., & Adolphs, R. (2010). Emotion processing and the
amygdala: From a “low road” to “many roads” of evaluating
biological significance. Nature Reviews Neuroscience, 11(11),
773–782. https://doi.org/10.1038/nrn2920, PubMed: 20959860
Pillai, A. S., & Jirsa, V. K. (2017). Symmetry breaking in space-time
hierarchies shapes brain dynamics and behavior. Neuron, 94(5),
1010–1026. https://doi.org/10.1016/j.neuron.2017.05.013,
PubMed: 28595045
Polimeni, J. R., Fischl, B., Greve, D. N., & Wald, L. L. (2010).
Laminar analysis of 7T BOLD using an imposed spatial activation
pattern in human V1. NeuroImage, 52(4), 1334–1346. https://doi
.org/10.1016/j.neuroimage.2010.05.005, PubMed: 20460157
Polimeni, J. R., & Lewis, L. D. (2021). Imaging faster neural dynamics
with fast fMRI: A need for updated models of the hemodynamic
response. Progress in Neurobiology, 207, 102174. https://doi.org
/10.1016/j.pneurobio.2021.102174, PubMed: 34525404
Rabinovich, M. I., Huerta, R., Varona, P., & Afraimovich, V. S.
(2008). Transient cognitive dynamics, metastability, and decision
making. PLoS Computational Biology, 4(5), e1000072. https://doi
.org/10.1371/journal.pcbi.1000072, PubMed: 18452000
Rabinovich, M. I., Simmons, A. N., & Varona, P. (2015). Dynamical
bridge between brain and mind. Trends in Cognitive Sciences,
19(8), 453–461. https://doi.org/10.1016/j.tics.2015.06.005,
PubMed: 26149511
Rabinovich, M. I., Tristan, I., & Varona, P. (2013). Neural dynamics
of attentional cross-modality control. PLoS One, 8(5), e64406.
https://doi.org/10.1371/journal.pone.0064406, PubMed:
23696890
Rabinovich, M. I., & Varona, P. (2011). Robust transient dynamics
and brain functions. Frontiers in Computational Neuroscience,
5, 24. https://doi.org/10.3389/fncom.2011.00024, PubMed:
21716642
Rabinovich, M. I., Varona, P., Selverston, A. I., & Abarbanel, H. D. I.
(2006). Dynamical principles in neuroscience. Reviews of
Modern Physics, 78(4), 1213–1265. https://doi.org/10.1103
/RevModPhys.78.1213
Rabinovich, M. I., Zaks, M. A., & Varona, P. (2020). Sequential
dynamics of complex networks in mind: Consciousness and
creativity. Physics Reports, 883, 1–32. https://doi.org/10.1016/j
.physrep.2020.08.003
Ramirez-Mahaluf, J. P., Roxin, A., Mayberg, H. S., & Compte, A.
(2017). A computational model of major depression: The role
of glutamate dysfunction on cingulo-frontal network dynamics.
Cerebral Cortex, 27(1), 660–679. https://doi.org/10.1093/cercor
/bhv249, PubMed: 26514163
Raut, R. V., Snyder, A. Z., Mitra, A., Yellin, D., Fujii, N., Malach, R.,
& Raichle, M. E. (2021). Global waves synchronize the brain's
functional systems with fluctuating arousal. Science Advances,
7(30), eabf2709. https://doi.org/10.1126/sciadv.abf2709,
PubMed: 34290088
Richlan, F., Schubert, J., Mayer, R., Hutzler, F., & Kronbichler, M.
(2018). Action video gaming and the brain: fMRI effects without
behavioral effects in visual and verbal cognitive tasks. Brain and
Behavior, 8(1), e00877. https://doi.org/10.1002/brb3.877,
PubMed: 29568680
Riley, M. A., & Holden, J. G. (2012). Dynamics of cognition. WIREs
Cognitive Science, 3(6), 593–606. https://doi.org/10.1002/wcs
.1200, PubMed: 26305268
Ritter, P., Schirner, M., McIntosh, A. R., & Jirsa, V. K. (2013). The
virtual brain integrates computational modeling and multimodal
neuroimaging. Brain Connectivity, 3(2), 121–145. https://doi.org
/10.1089/brain.2012.0120, PubMed: 23442172
Roberts, J. A., Friston, K. J., & Breakspear, M. (2017a). Clinical
applications of stochastic dynamic models of the brain, part I:
A primer. Biological Psychiatry: Cognitive Neuroscience and
NeuroImaging, 2(3), 216–224. https://doi.org/10.1016/j.bpsc
.2017.01.010, PubMed: 29528293
Roberts, J. A., Friston, K. J., & Breakspear, M. (2017b). Clinical
applications of stochastic dynamic models of the brain, part II:
A review. Biological Psychiatry: Cognitive Neuroscience and
NeuroImaging, 2(3), 225–234. https://doi.org/10.1016/j.bpsc
.2016.12.009, PubMed: 29528294
Robinson, P. A., Henderson, J. A., Gabay, N. C., Aquino, K. M.,
Babaie-Janvier, T., & Gao, X. (2021). Determination of dynamic
brain connectivity via spectral analysis. Frontiers in Human Neuro-
science, 15, 655576. https://doi.org/10.3389/fnhum.2021.655576,
PubMed: 34335207
Rolls, E. T., & Deco, G. (2010). The noisy brain: Stochastic dynamics
as a principle of brain function. Oxford, UK: Oxford University
Press. https://doi.org/10.1093/acprof:oso/9780199587865.001
.0001
Ryali, S., Shih, Y.-Y. I., Chen, T., Kochalka, J., Albaugh, D., Fang, Z.,
Supekar, K., Lee, J. H., & Menon, V. (2016). Combining optoge-
netic stimulation and fMRI to validate a multivariate dynamical
systems model for estimating causal brain interactions. Neuro-
Image, 132, 398–405. https://doi.org/10.1016/j.neuroimage
.2016.02.067, PubMed: 26934644
Sadeghi, S., Mier, D., Gerchen, M. F., Schmidt, S. N. L., & Hass, J.
(2020). Dynamic causal modeling for fMRI with Wilson-Cowan-
based neuronal equations. Frontiers in Neuroscience, 14, 593867.
https://doi.org/10.3389/fnins.2020.593867, PubMed: 33328865
Salehi, M., Greene, A. S., Karbasi, A., Shen, X., Scheinost, D., &
Constable, R. T. (2020). There is no single functional atlas even
for a single individual: Functional parcel definitions change with
task. NeuroImage, 208, 116366. https://doi.org/10.1016/j
.neuroimage.2019.116366, PubMed: 31740342
Sanz Perl, Y., Pallavicini, C., Pérez Ipiña, I., Demertzi, A., Bonhomme,
V., Martial, C., … Tagliazucchi, E. (2021). Perturbations in
dynamical models of whole-brain activity dissociate between
the level and stability of consciousness. PLoS Computational
Biology, 17(7), e1009139. https://doi.org/10.1371/journal.pcbi
.1009139, PubMed: 34314430
Sanz-Leon, P., Knock, S. A., Spiegler, A., & Jirsa, V. K. (2015).
Mathematical framework for large-scale brain network modeling
in The Virtual Brain. NeuroImage, 111, 385–430.
https://doi.org/10.1016/j.neuroimage.2015.01.002, PubMed:
25592995
Sara, S. J., & Bouret, S. (2012). Orienting and reorienting: The locus
coeruleus mediates cognition through arousal. Neuron, 76(1),
130–141. https://doi.org/10.1016/j.neuron.2012.09.011,
PubMed: 23040811
Schirner, M., Domide, L., Perdikis, D., Triebkorn, P., Stefanovski, L.,
Pai, R., … Ritter, P. (2021). Brain modelling as a service: Der
Virtual Brain on EBRAINS. arXiv. https://arxiv.org/abs/2102
.05888v2
Schoner, G., & Kelso, J. A. (1988). Dynamic pattern generation in
behavioral and neural systems. Science, 239(4847), 1513–1520.
https://doi.org/10.1126/science.3281253, PubMed: 3281253
Shine, J. M. (2021). The thalamus integrates the macrosystems of
the brain to facilitate complex, adaptive brain network dynamics.
Progress in Neurobiology, 199, 101951. https://doi.org/10.1016/j
.pneurobio.2020.101951, PubMed: 33189781
Shine, J. M., Aburn, M. J., Breakspear, M., & Poldrack, R. A. (2018).
The modulation of neural gain facilitates a transition between
functional segregation and integration in the brain. Elife, 7,
e31130. https://doi.org/10.7554/eLife.31130, PubMed:
29376825
Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H.,
Gorgolewski, K. J., Moodie, C. A., & Poldrack, R. A. (2016).
The dynamics of functional brain networks: Integrated network
states during cognitive task performance. Neuron, 92(2),
544–554. https://doi.org/10.1016/j.neuron.2016.09.018,
PubMed: 27693256
Shine, J. M., Breakspear, M., Bell, P. T., Ehgoetz Martens, K. A.,
Shine, R., Koyejo, O., Sporns, O., & Poldrack, R. A. (2019a).
Human cognition involves the dynamic integration of neural
activity and neuromodulatory systems. Nature Neuroscience,
22(2), 289–296. https://doi.org/10.1038/s41593-018-0312-0,
PubMed: 30664771
Shine, J. M., Hearne, L. J., Breakspear, M., Hwang, K., Müller, E. J.,
Sporns, O., Poldrack, R. A., Mattingley, J. B., & Cocchi, L.
(2019b). The low-dimensional neural architecture of cognitive
complexity is related to activity in medial thalamic nuclei.
Neuron, 104(5), 849–855. https://doi.org/10.1016/j.neuron.2019
.09.002, PubMed: 31653463
Shine, J. M., Müller, E. J., Munn, B., Cabral, J., Moran, R. J., &
Breakspear, M. (2021). Computational models link cellular
mechanisms of neuromodulation to large-scale brain dynamics.
Nature Neuroscience, 24(6), 765–776. https://doi.org/10.1038
/s41593-021-00824-6, PubMed: 33958801
Shine, J. M., & Poldrack, R. A. (2018). Principles of dynamic network
reconfiguration across diverse brain states. NeuroImage, 180,
396–405. https://doi.org/10.1016/j.neuroimage.2017.08.010,
PubMed: 28782684
Smith, S. M., Fox, P. T., Miller, K. L., Glahn, D. C., Fox, P. M.,
Mackay, C. E., Filippini, N., Watkins, K. E., Toro, R., Laird,
A. R., & Beckmann, C. F. (2009). Correspondence of the
brain's functional architecture during activation and rest. Pro-
ceedings of the National Academy of Sciences, 106(31),
13040–13045. https://doi.org/10.1073/pnas.0905267106,
PubMed: 19620724
Spiegler, A., Hansen, E. C. A., Bernard, C., McIntosh, A. R., & Jirsa,
V. K. (2016). Selective activation of resting-state networks follow-
ing focal stimulation in a connectome-based network model of
the human brain. ENeuro, 3(5). https://doi.org/10.1523
/ENEURO.0068-16.2016, PubMed: 27752540
Sporns, O. (2013). Network attributes for segregation and integra-
tion in the human brain. Current Opinion in Neurobiology, 23(2),
162–171. https://doi.org/10.1016/j.conb.2012.11.015, PubMed:
23294553
Sporns, O. (2015). Cerebral cartography and connectomics. Philo-
sophical Transactions of the Royal Society of London. Series B,
Biological Sciences, 370(1668), 20140173. https://doi.org/10
.1098/rstb.2014.0173, PubMed: 25823870
Stephan, K. E., Petzschner, F. H., Kasper, L., Bayer, J., Wellstein,
K. V., Stefanics, G., Pruessmann, K. P., & Heinzle, J. (2019).
Laminar fMRI and computational theories of brain function.
NeuroImage, 197, 699–706. https://doi.org/10.1016/j
.neuroimage.2017.11.001, PubMed: 29104148
Stringer, C., Pachitariu, M., Steinmetz, N. A., Kabel, M., Bartho, P.,
Harris, K. D., Sahani, M., & Lesica, N. A. (2016). Inhibitory con-
trol of correlated intrinsic variability in cortical networks. Elife, 5,
e19695. https://doi.org/10.7554/eLife.19695, PubMed:
27926356
Strogatz, S. H. (2015). Nonlinear dynamics and chaos: With appli-
cations to physics, biology, chemistry, and engineering (2nd ed.).
Boca Raton, FL: CRC Press.
Tkačik, G., Mora, T., Marre, O., Amodei, D., Palmer, S. E., Berry,
M. J., & Bialek, W. (2015). Thermodynamics and signatures of
criticality in a network of neurons. Proceedings of the National
Academy of Sciences, 112(37), 11508–11513. https://doi.org/10
.1073/pnas.1514188112, PubMed: 26330611
Tognoli, E., & Kelso, J. A. S. (2014). The metastable brain. Neuron,
81(1), 35–48. https://doi.org/10.1016/j.neuron.2013.12.022,
PubMed: 24411730
Tort, A. B. L., Komorowski, R., Eichenbaum, H., & Kopell, N. (2010).
Measuring phase-amplitude coupling between neuronal oscilla-
tions of different frequencies. Journal of Neurophysiology, 104(2),
1195–1210. https://doi.org/10.1152/jn.00106.2010, PubMed:
20463205
Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Com-
putation through neural population dynamics. Annual Review of
Neuroscience, 43(1), 249–275. https://doi.org/10.1146/annurev
-neuro-092619-094115, PubMed: 32640928
Wang, P., Kong, R., Kong, X., Liégeois, R., Orban, C., Deco, G.,
van den Heuvel, M. P., & Thomas Yeo, B. T. (2019). Inversion
of a large-scale circuit model reveals a cortical hierarchy in the
dynamic resting human brain. Science Advances, 5(1),
eaat7854. https://doi.org/10.1126/sciadv.aat7854, PubMed:
30662942
Watanabe, T., Hirose, S., Wada, H.,
Imai, Y., Machida, T.,
Shirouzu, I., Konishi, S., Miyashita, Y., & Masuda, N. (2013).
A pairwise maximum entropy model accurately describes
resting-state human brain networks. Nature Communications,
4(1), 1370. https://doi.org/10.1038/ncomms2388, PubMed:
23340410
Watanabe, T., Masuda, N., Megumi, F., Kanai, R., & Rees, G.
(2014). Energy landscape and dynamics of brain activity
during human bistable perception. Nature Communications,
5(1), 4765. https://doi.org/10.1038/ncomms5765, PubMed:
25163855
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory
interactions in localized populations of model neurons.
Biophysical Journal, 12(1), 1–24. https://doi.org/10.1016/S0006
-3495(72)86068-5, PubMed: 4332108
Wong, K.-F., & Wang, X.-J. (2006). A recurrent network mechanism
of time integration in perceptual decisions. Journal of Neurosci-
ence, 26(4), 1314–1328. https://doi.org/10.1523/jneurosci.3733
-05.2006, PubMed: 16436619
Zeeman, E. C. (1973). Catastrophe theory in brain modelling. Inter-
national Journal of Neuroscience, 6(1), 39–41. https://doi.org/10
.3109/00207457309147186, PubMed: 4792380