Research

The self-organized learning of noisy environmental
stimuli requires distinct phases of plasticity

Steffen Krüppel1,2 and Christian Tetzlaff1,2

1Department of Computational Neuroscience, Third Institute of Physics–Biophysics, Georg-August-University,
Göttingen, Germany
2Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany

Keywords: Synaptic plasticity, Intrinsic plasticity, Noise-robustness, Learning, Sensory pathways

an open access journal

ABSTRACT

Along sensory pathways, representations of environmental stimuli become increasingly
sparse and expanded. If additionally the feed-forward synaptic weights are structured
according to the inherent organization of stimuli, the increase in sparseness and expansion
leads to a reduction of sensory noise. However, it is unknown how the synapses in the brain
form the required structure, especially given the omnipresent noise of environmental stimuli.
Here, we employ a combination of synaptic plasticity and intrinsic plasticity—adapting the
excitability of each neuron individually—and present stimuli with an inherent organization
to a feed-forward network. We observe that intrinsic plasticity maintains the sparseness of the
neural code and thereby allows synaptic plasticity to learn the organization of stimuli in
low-noise environments. However, even high levels of noise can be handled after a
subsequent phase of readaptation of the neuronal excitabilities by intrinsic plasticity.
Interestingly, during this phase the synaptic structure has to be maintained. These results
demonstrate that learning and recalling in the presence of noise requires the coordinated
interplay between plasticity mechanisms adapting different properties of the neuronal circuit.

AUTHOR SUMMARY

Everyday life requires living beings to continuously recognize and categorize perceived
stimuli from the environment. To master this task, the representations of these stimuli become
increasingly sparse and expanded along the sensory pathways of the brain. Furthermore, the
underlying neuronal network has to be structured according to the inherent organization of
the environmental stimuli. However, how the neuronal network learns the required structure
even in the presence of noise remains unknown. In this theoretical study, we show that the
interplay between synaptic plasticity—controlling the synaptic efficacies—and intrinsic
plasticity—adapting the neuronal excitabilities—enables the network to encode the
organization of environmental stimuli. It thereby structures the network to correctly
categorize stimuli even in the presence of noise. After having encoded the stimuli’s
organization, consolidating the synaptic structure while keeping the neuronal excitabilities
dynamic enables the neuronal system to readapt to arbitrary levels of noise, resulting in a
near-optimal classification performance for all noise levels. These results provide new insights
into the interplay between different plasticity mechanisms and how this interplay enables
sensory systems to reliably learn and categorize stimuli from the surrounding environment.

Citation: Krüppel, S., & Tetzlaff, C.
(2020). The self-organized learning of
noisy environmental stimuli requires
distinct phases of plasticity. Network
Neuroscience, 4(1), 174–199. https://
doi.org/10.1162/netn_a_00118

DOI:
https://doi.org/10.1162/netn_a_00118

Supporting Information:
https://doi.org/10.1162/netn_a_00118

Received: 31 July 2019
Accepted: 09 December 2019

Competing Interests: The authors have
declared that no competing interests
exist.

Corresponding Author:
Christian Tetzlaff
tetzlaff@phys.uni-goettingen.de

Handling Editor:
Olaf Sporns

Copyright: © 2019
Massachusetts Institute of Technology
Published under a Creative Commons
Attribution 4.0 International
(CC BY 4.0) license

The MIT Press


Learning of noisy stimuli requires distinct phases of plasticity

INTRODUCTION

Learning to distinguish between different stimuli despite high levels of noise is an important
ability of living beings to ensure survival. However, the underlying neuronal and synaptic
processes of this ability are largely unknown.

The brain is responsible for controlling movements of an agent’s body in response to the
perceived stimulus. For example, the agent should run away from a predator or run after the
prey. To do so, the agent needs to be able to reliably classify the perceived stimulus despite
its natural variability (e.g., different individuals of the same predator species) or noise (e.g.,
impaired vision by obstacles). In general, the sensory processing systems of the brain map the
stimulus representation onto subsequent brain areas yielding successive representations which
are increasingly sparse in activity and expansive in the number of neurons. If the feed-forward
synaptic weights realizing this mapping are structured according to the inherent organization of
the stimuli (e.g., lion versus pig), the increased sparseness and expansion lead to a significant
reduction of noise and therefore to a reliable classification (Babadi & Sompolinsky, 2014).
However, it remains unclear how the synapses form the required structure despite noise during
learning. Furthermore, how can the system reliably adapt to varying levels of noise (e.g., being
in a silent forest compared with near a loud stream)?


In the mouse olfactory system, for example, 1,800 glomeruli receiving signals from olfac-
tory sensory neurons project to millions of pyramidal neurons in the piriform cortex yielding
an expansion of the stimulus representation (Franks & Isaacson, 2006; Mombaerts et al., 1996).
Activity of the glomeruli is relatively dense with 10%–30% of glomeruli responding to a given
natural odor (Vincis, Gschwend, Bhaukaurally, Beroud, & Carleton, 2012), while in the piri-
form cortex activity drops to 3%–15% indicating an increase in sparseness (Poo & Isaacson,
2009; Stettler & Axel, 2009). A similar picture can be observed in the Drosophila olfactory
system. Here, 50 glomeruli project to about 2,500 Kenyon cells in the mushroom body (Balling,
Technau, & Heisenberg, 1987; Jefferis et al., 2007). While about 59% of projection neurons
respond to a given odor, only 6% of Kenyon cells do (Turner, Bazhenov, & Laurent, 2008).
Similar ratios have been observed in the locust olfactory system (Perez-Orive et al., 2002). In
the cat visual system, the primary visual cortex has 25 times as many outputs as it receives
inputs from the LGN (Olshausen, 2003). Furthermore, V1 responses to natural visual stimuli are
significantly sparser than in the LGN (Dan, Atick, & Reid, 1996; Vinje & Gallant, 2000). Both
principles of increased expansion and sparseness of stimulus representations apply to other
sensory processing systems as well (Brecht & Sakmann, 2002; Chacron, Longtin, & Maler,
2011; DeWeese & Zador, 2003).

The functional roles of increased sparseness as well as expansion have already been pro-
posed in the Marr-Albus theory of the cerebellum (Albus, 1971; Marr, 1969). Here, different
representations are thought to evoke different movement responses even though the activity
patterns overlap. The Marr-Albus theory demonstrates that through expansion and the sparse
activity of granule cells, the overlapping patterns are mapped onto nonoverlapping patterns that
can easily be classified. A recent theoretical study has focused on sparse and expansive feed-
forward networks in sensory processing systems (Babadi & Sompolinsky, 2014). Here, small
variations in activity patterns are caused by internal neuronal noise, input noise, or changes in
insignificant properties of the stimuli. For reliable stimulus classification, these slightly varying
activity patterns belonging to the same underlying stimulus should evoke the same response
in a second layer (or brain area) of a sparse and expansive feed-forward network. Strikingly,
although the network is sparse and expansive, random synaptic weights increase both noise
and overlap of activity patterns in the second layer. On the other hand, the same network with

Synaptic weights:
The average transmission efficacy of
a synapse quantified as a single
number being adapted by diverse
plasticity processes.

Olfactory system:
The olfactory system includes all
brain areas processing sensory
information related to the sense of
smell.

Piriform cortex:
One brain area in the cerebrum
processing sensory information of the
sense of smell.

Mushroom body:
A brain area in insects that is
important for odor-related learning
and memory.

Cerebellum:
A brain area responsible for the
control of movement-related
functions such as coordination,
timing, or precision.

Network Neuroscience

175


Synaptic plasticity:
General term for different kinds of
biological mechanisms adapting the
weights of synapses depending on
neuronal activities.

Homeostatic synaptic plasticity:
Synaptic plasticity mechanism
adapting the synaptic weights such
that the neuronal dynamics remain in
a desired “healthy” regime.

Intrinsic plasticity:
General term for different kinds of
biological mechanisms adapting the
firing threshold or excitability of a
neuron.

synaptic weights structured according to the organization of stimuli reduces the noise and
overlap of activity patterns, simplifying subsequent classification. How a network is able to
learn the organization of stimuli, shape its synaptic structure according to this organization,
and do so even in the presence of noise is so far unknown.

The generally accepted hypothesis of learning is that it is realized by changes of synap-
tic weights by the process of (long-term) synaptic plasticity (Hebb, 1949; Martin, Grimwood,
& Morris, 2000). Synaptic weights are strengthened or weakened depending on the activity
of the pre- and postsynaptic neurons (Bi & Poo, 1998; Bliss & Lømo, 1973; Markram, Lübke,
Frotscher, & Sakmann, 1997). Hebbian plasticity describes the process of increasing a synaptic
weight if the activity of the two connected neurons is correlated (Hebb, 1949). Several theoret-
ical studies indicate that Hebbian plasticity alone would lead to divergent synaptic and neu-
ronal dynamics, thus requiring homeostatic synaptic plasticity (Triesch, Vo, & Hafner, 2018;
G. G. Turrigiano, Leslie, Desai, Rutherford, & Nelson, 1998) to counterbalance and stabilize
the dynamics (Miller & MacKay, 1994; Tetzlaff, Kolodziejski, Timme, & Wörgötter, 2011; Yger
& Gilson, 2015; Zenke & Gerstner, 2017; Zenke, Hennequin, & Gerstner, 2013). Furthermore,
neurons adapt their excitability by the process of intrinsic plasticity (Triesch, 2007; Zhang &
Linden, 2003). Intrinsic plasticity regulates the excitability of a given neuron so as to main-
tain a desired average activity (Benda & Herz, 2003; Desai, Rutherford, & Turrigiano, 1999;
LeMasson, Marder, & Abbott, 1993; G. Turrigiano, Abbott, & Marder, 1994), which leads to,
for example, the optimization of the input-output relation of a neuron (Triesch, 2007) or the
encoding of information in firing rates (Stemmler & Koch, 1999). Several theoretical studies in-
dicate that the interplay of intrinsic plasticity with synaptic plasticity allows neuronal systems
to infer the stimulus intensity (Monk, Savin, & Lücke, 2018; Monk, Savin, & Lücke, 2016), to
perform independent component analysis (Savin, Joshi, & Triesch, 2010), or to increase their
computational capacity (Hartmann, Lazar, Nessler, & Triesch, 2015; Lazar, Pipa, & Triesch,
2009). However, it remains unclear whether this interplay allows sensory systems, on the one
hand, to learn the organization of stimuli despite noise and, on the other hand, to adapt to
variations of the noise level.

In the present study, we show that in an expansive network intrinsic plasticity regulates the
neuronal activities such that the synaptic weights can learn the organization of stimuli even
in the presence of low levels of noise. Interestingly, after learning, the system is able to adapt
itself according to changes in the level of noise it is exposed to—even if these levels are high.
To do so, intrinsic plasticity has to readapt the excitability of the neurons while the synaptic
weights have to be maintained, indicating the need of a two-phase learning protocol.

In the following, we first present the basics of our theoretical model and methods and
demonstrate the ability of a feed-forward network with static random or static structured synap-
tic weights, respectively, to distinguish between noisy versions of 1,000 different stimuli (similar
to Babadi & Sompolinsky, 2014). Then, we introduce the synaptic and intrinsic plasticity rules
considered in this study. We train the plastic feed-forward network during an encoding phase
without noise and test its performance afterwards by presenting stimuli of different noise lev-
els. Intriguingly, the self-organized dynamics of synaptic and intrinsic plasticity yield a perfor-
mance and network structure similar to the static network initialized with structured synaptic
weights. Further analyses indicate that the performance of the plastic network to classify noisy
stimuli greatly depends on the neuronal excitability, especially for high levels of noise. Hence,
after learning without noise, we changed the noise level in order to test the performance but
let intrinsic plasticity readapt the excitability of the neurons. This readaptation phase signifi-
cantly increases the performance of the network. Note, however, if synaptic plasticity is present



during this second phase, the increase in performance is impeded by a prolonged and severe
performance decrease. In the next step, we show that in the encoding phase with both intrinsic
and synaptic plasticity the network can also learn from noisy stimuli if the level of noise is low.
Again, high levels of noise impede learning and classification performance. Interestingly, after
the subsequent readaptation phase the network initially trained with low-noise stimuli performs
just as well as a network trained with noise-free stimuli, demonstrating the robustness
of this learning mechanism to noise.

RESULTS

Model Setup and Classification Performance

The main question of this study concerns how sparse and expansive neural systems, such
as sensory processing areas, learn the inherent organization of stimuli enabling a reduction
of noise. To tackle this question, similar to a previous study (Babadi & Sompolinsky, 2014),
we consider a neural network that consists of two layers of rate-based neurons, with the first
layer being linked to the second layer via all-to-all feed-forward synaptic connections. The first
layer, called stimulus layer, is significantly smaller (NS = 1,000 neurons) than the second one,
called cortical layer (NC = 10,000 neurons). The activity patterns of the stimulus layer serve
as stimuli or inputs to the cortical layer. These stimulus patterns are constructed of firing rates
Si ∈ {0, 1} of the stimulus neurons i ∈ {1, …, NS}, with 0 representing a silent neuron and 1 a
maximally active one. Neurons belonging to the cortical layer possess a membrane potential
uj (j ∈ {1, …, NC}) modeled by a leaky integrator receiving the inputs from the stimulus layer.
The membrane potential of a cortical neuron is transformed into a firing rate Cj using a sig-
moidal transfer function. Similar to the stimulus neurons, we consider the minimal and max-
imal firing rates F min = 0 and F max = 1. Note that the point of inflection of the sigmoidal
transfer function ε j, also called cortical firing threshold, is neuron-specific.
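The cortical-neuron model just described can be sketched in a few lines of code. This is an illustrative reimplementation, not the paper's reference code: the Euler step size `dt`, time constant `tau_u`, and sigmoid gain `beta` are our own placeholder values (the actual parameters are fixed in the Methods), while the neuron-specific threshold ε_j enters as the inflection point of the transfer function.

```python
import numpy as np

def cortical_rates(u, eps, beta=50.0):
    """Sigmoidal transfer function: firing rates between F_min = 0 and
    F_max = 1, with the neuron-specific inflection point eps_j (the
    cortical firing threshold)."""
    return 1.0 / (1.0 + np.exp(-beta * (u - eps)))

def membrane_step(u, S, W, dt=0.1, tau_u=1.0):
    """One Euler step of the leaky-integrator membrane potential driven by
    the stimulus layer. u: potentials (N_C,), S: stimulus rates (N_S,),
    W: feed-forward weights (N_C, N_S)."""
    du = (-u + W @ S) / tau_u
    return u + dt * du
```

Because the thresholds are per-neuron, two cortical neurons receiving the same input can respond very differently, which intrinsic plasticity later exploits.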

The different activity patterns of the stimulus layer are organized into P = 1,000 stimulus
clusters (Figure 1A). Each stimulus cluster ν ∈ {1, …, P} consists of one characteristic activity
pattern, called central stimulus pattern $\bar{S}^\nu$, which represents the underlying stimulus (e.g., a
lion; black dots in the stimulus layer’s phase space in Figure 1A). To construct these central
stimulus patterns, for each cluster ν and each stimulus neuron i we randomly choose a firing
rate $\bar{S}^\nu_i \in \{0, 1\}$ with equal probability, thus resulting in random patterns of ones and zeros (see
Figure 1B for schematic examples). Furthermore, a stimulus cluster contains all noisy versions
$S^\nu$ of the underlying stimulus (e.g., a lion behind a tree or a rock; indicated by blue halos in
Figure 1A) generated by randomly flipping firing rates $S^\nu_i$ of the cluster’s central stimulus pattern
from 1 to 0 or vice versa with probability ΔS/2 (Figure 1B); ΔS thus reflects the average noise
level of all noisy stimulus patterns as well as the stimulus cluster’s size in the stimulus layer’s
phase space. If ΔS = 0, the cluster is only a single point in the stimulus layer’s phase space
(the central stimulus pattern $\bar{S}^\nu$) and is thus noise-free. The maximum value of the stimulus
cluster size ΔS = 1 represents a cluster that is distributed evenly across the entire phase space.
Here, the noise is so strong that no information remains. The stimulus cluster size ΔS can be
retrieved by the normalized Hamming distance between patterns of the same cluster:

$$\Delta S = \left\langle \frac{\sum_{i=1}^{N_S} |S^\nu_i - \bar{S}^\nu_i|}{N_S \cdot 1/2} \right\rangle_{S^\nu,\, \nu} \,, \qquad (1)$$

with the brackets denoting the average over all noisy stimulus patterns $S^\nu$ of all stimulus
clusters ν.
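The cluster construction and the measure of Equation 1 translate directly into code. The following is a sketch with `numpy`; the function names are our own, and `delta_S` plays the role of ΔS:

```python
import numpy as np

def central_patterns(P, N_S, rng):
    """P random binary central stimulus patterns: each rate is 0 or 1
    with equal probability."""
    return rng.integers(0, 2, size=(P, N_S))

def noisy_version(S_bar, delta_S, rng):
    """Flip each rate of a central pattern with probability delta_S / 2."""
    flips = rng.random(S_bar.shape) < delta_S / 2
    return np.where(flips, 1 - S_bar, S_bar)

def stimulus_cluster_size(S_noisy, S_bar):
    """Equation 1 for one cluster: normalized Hamming distance to the
    central pattern, averaged over the noisy patterns.
    S_noisy: (n, N_S) noisy patterns, S_bar: (N_S,) central pattern."""
    N_S = S_bar.size
    return np.mean(np.sum(np.abs(S_noisy - S_bar), axis=1) / (N_S * 0.5))
```

Averaging `stimulus_cluster_size` over all clusters ν recovers the full expectation in Equation 1.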



Figure 1. Network model and mathematical approach to quantify the ability of an expansive, sparse network to reduce noise. (A) The feed-
forward network consists of two layers of rate-based neurons with the stimulus layer projecting stimuli onto the cortical layer via all-to-all
feed-forward synaptic connections. Stimuli are organized in P = 1,000 clusters, with each cluster ν consisting of a characteristic central
pattern $\bar{S}^\nu$ (black dots) and noisy patterns $S^\nu$ (blue halos around dots). The size of the stimulus clusters ΔS corresponds to the level of noise
and is indicated schematically by the size of the blue halos. Stimulus clusters are mapped by the synaptic connections to cortical clusters
containing central cortical patterns $\bar{C}^\nu$ and noisy cortical patterns $C^\nu$. (B) Illustration of different patterns and measures used in this study. The
activity of each neuron (box) is indicated by its gray scale (left: stimulus layer; right: cortical layer). The central pattern $\bar{S}^\nu$ of each stimulus
cluster (underlying stimulus) evokes a specific central pattern $\bar{C}^\nu$ in the cortical layer. Noisy versions of a central stimulus pattern (here $S^2$)
activate different cortical patterns with their average distance Δc from the original pattern depending on the structure of the feed-forward
synaptic weights. (C) Random synaptic weights increase the cluster size for all stimulus cluster sizes, that is, ΔC > ΔStest; the noise in the
stimuli is thus amplified by the network. (D) Synapses that are structured in relation to the organization of the underlying stimuli (stimulus
central patterns $\bar{S}^\nu$) decrease the size of clusters, that is, the noise, up to a medium noise level (ΔStest ≈ 0.45). (C, D) Dashed line indicates
ΔC = ΔStest.

Every activity pattern of the stimulus layer elicits an activity pattern in the cortical layer, such
that stimulus clusters are mapped to cortical clusters (dashed arrows in Figure 1A). Similar to
the stimulus cluster, each cortical cluster consists of one central pattern $\bar{C}^\nu$ (evoked by the
noise-free stimulus $\bar{S}^\nu$) and noisy patterns $C^\nu$ (evoked by the noisy stimuli $S^\nu$). Because of the
complex mapping of the stimulus patterns onto the cortical layer via the feed-forward synaptic
weights, it is not clear how the level of noise is affected by this mapping. Therefore, we estimate
the noise in the cortical layer in analogy to Equation 1:

$$\Delta c = \left\langle \frac{\sum_{j=1}^{N_C} |C^\nu_j - \bar{C}^\nu_j|}{N_C \cdot Z(C^\nu, \bar{C}^\nu)} \right\rangle_{C^\nu,\, \nu} \,, \qquad (2)$$

where $Z(C^\nu, \bar{C}^\nu)$ is a normalization factor (see Methods for more details). As different stimulus
clusters are mapped by the same feed-forward weights onto cortical clusters, random



correlations between the cortical clusters could be induced. To account for these correlations
we calculate the average distance between clusters by

$$d_C = \left\langle \frac{\sum_{j=1}^{N_C} |\bar{C}^\kappa_j - \bar{C}^\lambda_j|}{N_C \cdot Z(\bar{C}^\kappa, \bar{C}^\lambda)} \right\rangle_{\kappa,\, \lambda} \,, \qquad (3)$$

and correct Equation 2 using this cortical cluster distance (Equation 3) analogous to a signal-
to-noise/noise-to-signal ratio to obtain the cortical cluster size

$$\Delta C = \frac{\Delta c}{d_C} \,. \qquad (4)$$

Therefore, if each pattern $S^\nu$ of a stimulus cluster ν is mapped onto a different (random)
pattern $C^\nu$ in the cortical layer, ΔC = 1 and the cluster is distributed evenly over the entire
cortical layer’s phase space. If each pattern of a stimulus cluster is mapped onto the same
pattern of the cortical cluster (the central pattern $\bar{C}^\nu$), ΔC = 0.
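Equations 2–4 can be sketched as follows. The normalization factor Z is defined in the paper's Methods and not reproduced here, so it is passed in as a function argument; the constant Z = 1/2 used in the test below is only a stand-in mirroring the stimulus-layer normalization, not the paper's actual choice.

```python
import numpy as np

def cluster_size_cortical(C_noisy, C_bar, Z):
    """Equation 2: average normalized distance of the noisy cortical patterns
    C_noisy (n, N_C) from their central pattern C_bar (N_C,). Z is the
    normalization factor from the paper's Methods, passed in as a callable."""
    N_C = C_bar.size
    d = [np.sum(np.abs(C - C_bar)) / (N_C * Z(C, C_bar)) for C in C_noisy]
    return float(np.mean(d))

def cluster_distance(C_bars, Z):
    """Equation 3: average pairwise distance between the central cortical
    patterns of different clusters (rows of C_bars)."""
    N_C = C_bars.shape[1]
    dists = [np.sum(np.abs(a - b)) / (N_C * Z(a, b))
             for i, a in enumerate(C_bars) for b in C_bars[i + 1:]]
    return float(np.mean(dists))

def corrected_cluster_size(delta_c, d_C):
    """Equation 4: cortical cluster size Delta C = Delta c / d_C."""
    return delta_c / d_C
```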

In summary, both the stimulus cluster size ΔS as well as the cortical cluster size ΔC are
measures for the amount of random fluctuations of different activity patterns belonging to the
same underlying stimulus. As such, a network tasked with reducing these random fluctuations
should decrease the cluster size, that is, ΔC < ΔS.

Static Networks

Central to the performance in reducing the cluster size or noise are the feed-forward synaptic
weights ωji between neurons. In the following, we predefine the synaptic weights and test the
performance of the network for different levels of noise ΔStest while keeping the synaptic
weights fixed. For each noise level ΔStest, we create noisy stimulus patterns for all clusters and
use them to evaluate the average noise ΔC in the cortical layer. By doing so, we obtain a
performance curve ΔC(ΔStest) of the network. If the weights are initialized randomly, here
drawn from a Gaussian distribution $\mathcal{N}(0, 2/\sqrt{N_S})$, the cortical cluster size ΔC is always larger
than the stimulus cluster size ΔStest, as the performance curve (red line in Figure 1C) is above
the identity line (ΔC = ΔStest; dashed line) for all values of ΔStest. In other words, the noise of
the stimuli (ΔStest) is amplified by the network by increasing the variations between different
cortical patterns of the same underlying stimulus (ΔC > ΔStest). Note that this amplification
of noise is present even though the network is expansive and sparse (Babadi & Sompolinsky,
2014).

This picture changes if the weights are structured according to the organization of the en-
vironmental stimuli. To portray such a structure, we initialize the synaptic weights according
to Babadi and Sompolinsky (2014) and Tsodyks and Feigelman (1988):

$$\omega_{ji} = \frac{100}{N_S} \sum_{\nu=1}^{P} \left( \bar{S}^\nu_i - 1/2 \right) \left( R^\nu_j - F^T \right) \,. \qquad (5)$$

Note that the factor 100 ensures that the synaptic weights are in the same order of magnitude
as in later analyses. Equation 5 results in a mapping of the central stimulus patterns $\bar{S}^\nu$ to
randomly generated, $F^T$-sparse cortical patterns $R^\nu$ ($F^T = 0.001$). Interestingly, this mapping
yields a reduction of noise for up to medium levels (ΔStest ≲ 0.45) such that the cortical
cluster size ΔC is smaller than the stimulus cluster size ΔStest (Figure 1D). In other words, as
already shown in a previous study (Babadi & Sompolinsky, 2014), a structured network reduces


small fluctuations of representations of the same underlying stimulus. Note that in the random
as well as the structured network each cortical neuron has an individual firing threshold ε j.
Neuron-specific thresholds are required in order to ensure that every cortical neuron’s average
response to the central stimulus patterns equals the target activity; that is, $\langle \bar{C}^\nu_j \rangle_\nu = F^T$ for all j.
We chose $F^T = 0.001$ as this results in all cortical neurons of the structured network firing in
response to exactly one central stimulus pattern, and remaining silent in response to all others
(as $F^T P = 1$), which simplifies the qualitative analysis of the results. In the structured network,
the method used for initializing the firing thresholds of each cortical neuron places them at
the center of the strongest and second strongest membrane potentials evoked by the central
stimulus patterns.

These results show that expansive and sparse networks reduce the noise of stimuli if the
synaptic weights from the stimulus to the cortical layer are structured according to the under-
lying organization of stimuli (here according to the central stimulus patterns $\bar{S}^\nu$). So far, we
have used Equation 5 to artificially set the synaptic weights to the correct values. The question
remains how a network can learn these values from the environmental stimuli.
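For concreteness, the structured initialization of Equation 5 can be written as a single matrix product. This is a sketch with our own variable names, where `R` stands for the randomly generated $F^T$-sparse cortical patterns $R^\nu$:

```python
import numpy as np

def structured_weights(S_bars, R, F_T=0.001):
    """Equation 5: omega_ji = (100 / N_S) * sum_nu (S_bar^nu_i - 1/2)(R^nu_j - F_T).
    S_bars: (P, N_S) central stimulus patterns,
    R: (P, N_C) sparse random cortical target patterns.
    Returns the weight matrix of shape (N_C, N_S)."""
    P, N_S = S_bars.shape
    return (100.0 / N_S) * (R - F_T).T @ (S_bars - 0.5)
```

The matrix product sums over the cluster index ν exactly as in the equation; entry [j, i] of the result pairs cortical neuron j with stimulus neuron i.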

Plastic Network

As demonstrated above, a network with random synaptic weights increases the level of noise,
while a structured network decreases it (Figure 1C, D). How can a network develop this
structure in a self-organized manner given only the environmental stimuli? To investigate this
question, we initialized a network with the same random synaptic weights as above, that is,
Gaussian distributed ωji, and let the system evolve over time using plasticity mechanisms that
adapt the synaptic weights and neuronal excitabilities. These plasticity mechanisms are as-
sumed to depend on local quantities only and thus on the directly accessible neuronal activities
and synaptic weights (Gerstner & Kistler, 2002; Tetzlaff et al., 2011). Given this assumption,
the environmental stimuli influence the dynamics of the plasticity mechanisms as the stimulus
patterns determine the activities of the neurons. We consider two plasticity processes: Synap-
tic weights are controlled by Hebbian correlation learning and an exponential decay term
(for weight stabilization),

$$\dot{\omega}_{ji} = \mu S_i C_j - \eta \omega_{ji} \,, \qquad (6)$$

while a faster intrinsic plasticity mechanism regulates the firing thresholds ε j of the cortical
neurons so as to achieve the target firing rate $F^T = 0.001$:

$$\dot{\varepsilon}_j = \kappa (C_j - F^T) \,, \qquad (7)$$

with the parameters μ, η, κ determining the timescales of the mechanisms. Similar to pre-
vious studies (Lazar et al., 2009; Miner & Triesch, 2016; Triesch, 2007), we consider that the
process of intrinsic plasticity is faster than synaptic plasticity.
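A minimal Euler discretization of Equations 6 and 7 might look as follows; the learning rates are placeholders chosen only to respect the paper's single stated constraint that intrinsic plasticity acts faster than synaptic plasticity:

```python
import numpy as np

def plasticity_step(W, eps, S, C, mu=1e-3, eta=1e-4, kappa=0.1,
                    F_T=0.001, dt=1.0):
    """One Euler step of Equations 6 and 7.
    Equation 6: Hebbian growth mu * S_i * C_j, counteracted by the
    exponential weight decay eta * omega_ji.
    Equation 7: each threshold eps_j moves so that the rate C_j
    approaches the target F_T."""
    W_new = W + dt * (mu * np.outer(C, S) - eta * W)   # W[j, i] pairs C_j with S_i
    eps_new = eps + dt * kappa * (C - F_T)
    return W_new, eps_new
```

Note the sign logic of Equation 7: a neuron firing above the target rate has its threshold raised, which suppresses its future responses, and vice versa.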

Training is carried out in repeated learning steps or trials. In each learning step L, we present
all central stimulus patterns $\bar{S}^\nu$ (ν ∈ {1, …, P}) to the network once, ensuring there is no
chronological information (see Methods for details). This corresponds to a stimulus cluster
size $\Delta S^{\text{learn}} = 0$, that is, noise-free learning. At different stages of learning (that is, after different
numbers of learning steps), we test the performance of the network for different levels of noise
ΔStest as has been done for the static networks.
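The encoding phase then amounts to nesting the rate computation and both plasticity updates inside a loop over learning steps, shuffling the presentation order in every step so that no chronological information is available. All parameter values below (learning rates, sigmoid gain, initialization scale, expansion factor) are illustrative assumptions, not the values from the paper's Methods:

```python
import numpy as np

def encode(S_bars, n_steps, mu=1e-3, eta=1e-4, kappa=0.1,
           F_T=0.001, beta=50.0, seed=0):
    """Sketch of the noise-free encoding phase: per learning step L, every
    central stimulus pattern is presented once in random order."""
    rng = np.random.default_rng(seed)
    P, N_S = S_bars.shape
    N_C = 10 * N_S                                   # expansive cortical layer
    W = rng.normal(0.0, 2.0 / np.sqrt(N_S), size=(N_C, N_S))
    eps = np.zeros(N_C)
    for _ in range(n_steps):
        for nu in rng.permutation(P):                # shuffled, no chronology
            S = S_bars[nu]
            x = np.clip(beta * (W @ S - eps), -60.0, 60.0)
            C = 1.0 / (1.0 + np.exp(-x))             # steady-state sigmoid rates
            W += mu * np.outer(C, S) - eta * W       # Equation 6 (synaptic)
            eps += kappa * (C - F_T)                 # Equation 7 (intrinsic, faster)
    return W, eps
```

Using the steady-state rate instead of integrating the membrane dynamics within each presentation is a simplification for the sketch.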



As learning progresses (Figure 2A), the performance curve develops from the random net-
work’s (red line), which amplifies stimulus noise, into one similar to the structured network’s
performance curve (blue compared with magenta line). The plasticity mechanisms (Equations 6
and 7) enable the network to encode the organization of the stimuli (existence of different


Figure 2. Self-organization of the synaptic and neuronal structure via synaptic and intrinsic plas-
ticity in a noise-free environment. (A) By repeatedly presenting one stimulus pattern $S^\nu$ per cluster
per learning step L using a stimulus cluster size $\Delta S^{\text{learn}} = 0$ (i.e., presenting the central stimulus
patterns $\bar{S}^\nu$), the network’s performance develops from the noise-amplification of a random network
(red, equal to Figure 1C) to a performance significantly decreasing the level of noise for ΔStest up to
about 0.6 (blue). (B, C) During learning, the synaptic weights develop into a bimodal distribution (B;
only the weights connecting to neuron 1 are shown) that is correlated to the distribution of the static,
structured network (C). (D) For each cortical neuron (here shown for neuron 1), the firing threshold
(green) increases such that only one central stimulus pattern can evoke a membrane potential larger
than the threshold (red lines depict membrane potentials). (E) Similar to the synaptic weights (C),
the firing thresholds tend to become correlated to the ones of the static, structured network.



grupos) in a self-organized manner, with most of the performance gained in the first L =
60, 000 learning steps: During learning, the synaptic weights evolve from the initial Gaussian
distribution into a bimodal distribution with peaks at about 0.033 y 0 (see Figure 2B for
an example). The emergence of the bimodal weight distribution and its link to the network
performance can be explained as follows: Because of the random initialization of the synap-
tic weights, each central stimulus pattern leads to a different membrane potential in a given
cortical neuron such that all P stimuli together yield a random distribution of evoked mem-
brane potentials (ver, p.ej., Figura 2D; red lines depict membrane potentials). As the target
firing rate is chosen such that each neuron ideally responds to only one central stimulus pat-
tern (as F T P = 1), intrinsic plasticity adapts the firing threshold ε j of a neuron such that one of
the evoked membrane potentials leads to a distinctly above-average firing rate. Como consecuencia,
synapses connecting stimulus neurons being active at the corresponding stimulus pattern with
the considered cortical neuron are generally strengthened the most by Hebbian synaptic plas-
ciudad. These synapses will likely form the upper peak of the final synaptic weight distribution
(Figura 2B). Mientras tanto, all other synaptic weights are dominated by the synaptic weight decay
(second term in Equation 6) and will later form the lower peak of the distribution at zero. Como
the continued differentiation of the synaptic weights increases the evoked membrane potential
of the most influential central stimulus pattern, these two processes of synaptic and neuronal
adaptation drive each other. Interestingly, the resulting synaptic weights are correlated with the structured synapses (Figure 2C) initialized using Equation 5 (here the cortical patterns Rν of Equation 5 were generated using the central cortical patterns Cν of the plastic network at the corresponding learning step L; see Methods for further details). Note that the cortical firing thresholds εj of the plastic network become correlated with the values of the static, structured one as well (Figure 2E).
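The mutually reinforcing loop described above can be sketched in a few lines of code for a single cortical neuron. The sigmoidal rate function, all rate constants, and the network size below are illustrative assumptions rather than the paper's exact model; the Hebbian-growth-plus-decay rule (paraphrasing Equation 6) and the threshold update toward a target rate are only schematic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_patterns = 100, 5
# sparse central stimulus patterns (one per cluster)
patterns = (rng.random((n_patterns, n_stim)) < 0.1).astype(float)

w = rng.normal(0.01, 0.002, n_stim)   # incoming weights of one cortical neuron
eps = 0.0                             # firing threshold (intrinsic excitability)
F_T = 1.0 / n_patterns                # target rate: respond to one pattern in P
eta_w, decay, eta_eps = 0.05, 0.002, 0.01   # assumed rate constants

def rate(u, eps):
    # illustrative sigmoidal transfer of membrane potential u around threshold eps
    x = np.clip((u - eps) / 0.01, -500.0, 500.0)
    return 1.0 / (1.0 + np.exp(-x))

for step in range(20000):
    s = patterns[rng.integers(n_patterns)]   # present one central pattern
    u = w @ s                                 # membrane potential
    f = rate(u, eps)
    w += eta_w * f * s - decay * w            # Hebbian growth plus weight decay
    eps += eta_eps * (f - F_T)                # intrinsic plasticity toward target rate

# the weights tend to split into a strengthened group (synapses of the pattern
# the neuron became tuned to) and a group decayed toward zero
responses = rate(patterns @ w, eps)
```

In this sketch the threshold rises until, ideally, only the most influential pattern stays above it, while the decay term pulls all other weights toward zero, mirroring the bimodal distribution of Figure 2B.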

En resumen, synaptic and intrinsic plasticity interact and adapt the neuronal network such
eso, in a noise-free environment, it learns to encode the organization of the stimuli in a way
comparable to a static, prestructured network. The trained network is then able to reduce the
noise of environmental stimuli even for noise levels up to about 0.6.

yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

t

/

/

mi
d
tu
norte
mi
norte
a
r
t
i
C
mi

pag
d

yo

F
/

/

/

/

/

4
1
1
7
4
1
8
6
6
8
2
2
norte
mi
norte
_
a
_
0
0
1
1
8
pag
d

.

t

The Functional Role of the Cortical Firing Thresholds

While being structurally similar, the performance of the trained, plastic network (Figure 2A, blue) appears significantly better than the performance of the static, structured network (magenta). This fact is not self-explanatory, since both the synaptic weights and the cortical firing thresholds are strongly correlated between the two networks (Figure 2C, E). However, a closer look at the cortical firing thresholds and their link to the performance of the network reveals the cause of this difference:

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
7
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

In the trained network (L = 200,000), as mentioned before, each cortical neuron should fire in response to the central stimulus pattern S̄ν of exactly one cluster and stay silent otherwise. As an example, we will focus on cortical neuron j = 1, which fires in response to the central stimulus pattern S̄842 of cluster ν = 842 and remains silent in response to all other central stimulus patterns. In general, two types of errors can occur.

False negatives (a stimulus of cluster 842 is presented and cortical neuron 1 falsely does
not fire): Noisy patterns of cluster 842 elicit a distribution of membrane potentials in corti-
cal neuron 1 (Figura 3A), which depends on the stimulus cluster size ΔStest, eso es, the level
of noise. All noisy stimulus patterns S 842
that evoke a membrane potential in neuron 1 eso es
higher than the neuron’s firing threshold ε1 result in a strong activation of neuron 1. The neuron


yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

t

/

/

mi
d
tu
norte
mi
norte
a
r
t
i
C
mi

pag
d

yo

F
/

/

/

/

/

4
1
1
7
4
1
8
6
6
8
2
2
norte
mi
norte
_
a
_
0
0
1
1
8
pag
d

.

t

Figure 3. The classification performance of each neuron depends on its firing threshold. In a single cortical neuron (here neuron j = 1), multiple noisy stimulus patterns of the same stimulus cluster elicit a distribution of membrane potentials. Two distinct distributions can be identified: (A) The distribution of membrane potentials evoked by noisy stimulus patterns belonging to the cluster whose central pattern elicits firing in the given cortical neuron (blue; here cluster ν = 842). For any ΔStest, all stimuli yielding a membrane potential that is below the neuron's firing threshold (dashed line; ε1) do not elicit a strong neuronal response, representing false negatives. The distribution significantly depends on the level of noise ΔStest. (B) The membrane potential distribution in response to noisy stimulus patterns of the clusters the neuron is not tuned to (ν ≠ 842). Here, all stimuli yielding a membrane potential above the firing threshold are false positives. (C) ΔStest = 0: A higher firing threshold ε leads to more false negatives (orange) but fewer false positives (magenta), and vice versa for a lower threshold. The sum of errors (dashed red) is negligible in a large regime (blue area: gradient is less than 0.001). (D) ΔStest = 0.7: With higher levels of stimulus noise, the total error and the classification performance depend critically on the firing threshold. (C, D) ε1,opt: optimal value of the firing threshold for the given level of noise ΔStest yielding the lowest total error; ε1: value of the firing threshold after learning with noise-free stimuli (ΔSlearn = 0; Figure 2); ε1,stat: firing threshold in the static network (Figure 1D).

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
7
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

therefore classifies these S 842
correctly as belonging to cluster 842. Sin embargo, noisy patterns
S 842
evoking a lower membrane potential than ε1 do not elicit strong activation of cortical
neurona 1. These noisy patterns are falsely classified as not belonging to cluster 842 and corre-
spond to false negatives.

False positives (a stimulus of a cluster ν ≠ 842 is presented and cortical neuron 1 falsely fires): Similar to the analysis of false negatives, the analysis of false positives can be done with


clusters whose central patterns should not elicit activity in cortical neuron 1. The distribution
of membrane potentials evoked by noisy patterns of these clusters does not significantly de-
pend on the stimulus cluster size ΔStest (Figure 3B). Noisy stimulus patterns Sν (ν ≠ 842) are
classified correctly as not part of cluster 842 if neuron 1’s membrane potential is lower than
its firing threshold ε1. All noisy patterns evoking a higher membrane potential falsely lead to a
firing of cortical neuron 1. They correspond to false positives.

Both false positives and false negatives depend on the firing threshold εj of a neuron j. For all values of ΔStest, a lower firing threshold would generally lead to fewer false negatives (efn,j; Figure 3A) but simultaneously to more false positives (efp,j; Figure 3B), and vice versa for a higher firing threshold. Consequently, there is a trade-off between false negatives and false positives, with their sum being related to the network's performance or cortical cluster size (see Methods for derivation):

ΔC ≈ etot,j = efn,j + efp,j    ∀j .    (8)

The performance of the network, or the total error etot,j, thus depends on a cortical neuron's firing threshold in a nonlinear manner. Given noise-free stimuli (ΔStest = 0), cortical neuron 1 makes almost no classification error over a large regime of firing-threshold values (dashed red line in Figure 3C; the gradient in the shaded blue area is less than 0.001). For a higher noise level (e.g., ΔStest = 0.7, Figure 3D), there is no such extended regime of low-error threshold values. Instead, small variations of the firing threshold can drastically change the classification performance, since the membrane potential response distributions overlap at these noise levels (Figure 3A, B).
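This trade-off can be made concrete with a small numerical sketch. Assuming, purely for illustration, Gaussian membrane-potential distributions for the tuned and untuned clusters (the distributions in Figure 3A, B are empirical, and the weighting p_tuned below is a hypothetical stand-in for the relative frequency of the tuned cluster), the total error of Equation 8 as a function of the threshold shows a broad low-error regime when the distributions are well separated, and becomes threshold-critical when they overlap:

```python
import math

def total_error(eps, mu_tuned, mu_untuned, sigma, p_tuned=0.001):
    """Total error e_tot = e_fn + e_fp for two Gaussian membrane-potential
    distributions (an illustrative assumption, not the paper's empirical ones)."""
    cdf = lambda x, mu: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    e_fn = p_tuned * cdf(eps, mu_tuned)                # tuned cluster below threshold
    e_fp = (1 - p_tuned) * (1 - cdf(eps, mu_untuned))  # untuned cluster above threshold
    return e_fn + e_fp

thresholds = [0.4, 0.5, 0.6, 0.7, 0.8]
# well-separated distributions (low noise): broad low-error regime
errs_low = [total_error(e, mu_tuned=1.0, mu_untuned=0.2, sigma=0.05) for e in thresholds]
# overlapping distributions (high noise): error depends critically on eps
errs_high = [total_error(e, mu_tuned=1.0, mu_untuned=0.2, sigma=0.3) for e in thresholds]
```

With sigma = 0.05 every threshold between the two means yields a negligible total error, mirroring the flat blue regime of Figure 3C; with sigma = 0.3 no threshold avoids a substantial error, mirroring Figure 3D.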

During training without noise (ΔSlearn = 0), the neuronal firing threshold ε1 rose to the lower bound of the low-error regime of ΔStest = 0 (blue area; Figure 3C). In the static network, however, firing thresholds ε1,stat were placed at the center between the highest and second-highest membrane potentials in response to central stimulus patterns, leading to much higher values. Therefore, if the network performance is tested for small stimulus clusters (low noise ΔStest; Figure 3C), the static and the plastic network have a similar total error and classification performance. For larger stimulus clusters (high noise levels ΔStest; Figure 3D), on the other hand, the higher firing thresholds of the static network lead to considerably more misclassifications and consequently to a higher cortical cluster size ΔC. Consequently, the fact that the relation between the threshold and its classification error etot,j depends on the noise ΔStest provides an explanation for the large performance differences between the static structured and the plastic network (Figure 2A).

This example (Figure 3) demonstrates that the value of the neuron-specific threshold εj,opt optimizing a neuron's classification performance depends on the stimulus cluster size ΔStest, that is, the current level of noise (dotted lines in Figure 4A for neuron 1 in blue and neuron 2 in green). The firing thresholds after training (solid lines in Figure 4A), however, are independent of ΔStest, as they are determined by the noise present during training (ΔSlearn = 0). For ΔStest ≲ 0.5, these thresholds are within the regime of low total error (shaded areas indicate the low-error regime for each neuron, marked by the blue area in Figures 3C and 3D), yielding a high classification performance of the network. However, for ΔStest ≳ 0.5, the thresholds εj resulting from training without noise (ΔSlearn = 0) start to deviate significantly from the optimal thresholds εj,opt, leading to a decreasing classification performance (Figure 2A and Figure 4C, solid lines for total error of individual neurons). Interestingly, the deviation from the optimal threshold is accompanied by a decrease of the average activity level (solid lines; Figure 4B), whereas the


yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

/

t

/

mi
d
tu
norte
mi
norte
a
r
t
i
C
mi

pag
d

yo

F
/

/

/

/

/

4
1
1
7
4
1
8
6
6
8
2
2
norte
mi
norte
_
a
_
0
0
1
1
8
pag
d

.

t

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
7
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

Learning of noisy stimuli requires distinct phases of plasticity

yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

t

/

/

mi
d
tu
norte
mi
norte
a
r
t
i
C
mi

pag
d

yo

F
/

/

/

/

/

4
1
1
7
4
1
8
6
6
8
2
2
norte
mi
norte
_
a
_
0
0
1
1
8
pag
d

t

.

Figure 4. A second learning phase—the readaptation phase—enables the neuronal system to readapt to arbitrary noise levels using intrinsic plasticity. (A–C) After learning without noise, a second learning phase with the noise level ΔStest and only intrinsic plasticity active enables the thresholds to readapt from the values after the first learning phase εj (solid lines) to adapted values εj,adapt (dashed lines), close to the optimal threshold values εj,opt (dotted lines), increasing performance. Blue: neuron 1; green: neuron 2. (A) ΔStest-dependency of cortical thresholds; shaded areas indicate regimes of low error gradient (Figure 3C); (B) ΔStest-dependency of average activities; (C) ΔStest-dependency of total error (dashed lines lie on top of dotted lines). Solid red line shows performance of the whole network (from Figure 2A), confirming Equation 8. (D) If synaptic plasticity is present during the second learning phase as well, ΔC initially drops because of intrinsic plasticity and then increases with ongoing presentation of noisy stimuli, indicating a disintegration of the synaptic structure (solid lines; different colors represent different noise levels). Dashed lines indicate ΔC-values for a second learning phase with intrinsic plasticity alone.

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
7
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

optimal thresholds would keep the cortical activity close to the target activity FT = 0.001 (dotted lines; for ΔStest ≳ 0.85 the total error is high and nearly independent of the threshold; see Supplementary Figure 1). We thus expect that after initial learning, intrinsic plasticity could readapt the neuronal firing thresholds according to the present level of noise such that the target activity is maintained and the thresholds εj,adapt approximate the optimal threshold values εj,opt.

We therefore considered a second learning phase, the readaptation phase, which is conducted after the initial training or encoding phase is completed. In the readaptation phase, the stimulus cluster size is the same as the one the performance is tested for, that is, ΔStest. For now, synaptic plasticity is deactivated, as we will only focus on intrinsic plasticity adapting the


cortical firing thresholds εj,adapt. To implement this readaptation phase, after the first learning phase is completed, we repeatedly presented one noisy pattern Sν per cluster using a stimulus cluster size ΔStest. Threshold adaptation was stopped when the mean of all cortical thresholds changed by less than 0.0001% in one step, which resulted in fewer than 7,000 steps for each ΔStest. As expected, intrinsic plasticity adjusts the firing thresholds during this second phase so as to achieve the target firing rate FT for all ΔStest (dashed lines; Figure 4B). Furthermore, the adapted thresholds εj,adapt (dashed lines; Figure 4A) are similar to the optimal thresholds εj,opt (dotted lines). This leads to a near-optimal classification performance, which is considerably better than without a readaptation phase (Figure 4C, dashed lines lie on top of dotted lines).
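The readaptation procedure (weights frozen, only the thresholds adapting toward the target rate, stopping once the mean threshold changes by less than 0.0001% in one step) can be sketched as follows; the network size, rate function, and learning rate are illustrative assumptions, and a step cap is added so the sketch always terminates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cortical, n_stim = 50, 100
W = rng.random((n_cortical, n_stim)) * 0.03   # frozen synaptic weights: no synaptic plasticity
eps = np.full(n_cortical, 0.1)                # firing thresholds to be readapted
F_T, eta = 0.001, 0.01                        # target rate and adaptation rate (assumed values)

def rates(stimulus, eps):
    # illustrative sigmoidal transfer around each neuron's threshold
    x = np.clip((W @ stimulus - eps) / 0.01, -500.0, 500.0)
    return 1.0 / (1.0 + np.exp(-x))

for step in range(50_000):                    # cap guarantees termination of the sketch
    stimulus = (rng.random(n_stim) < 0.1).astype(float)   # one noisy pattern per step
    d_eps = eta * (rates(stimulus, eps) - F_T)  # raise threshold if too active, lower if silent
    eps += d_eps
    # stop once the mean threshold changes by less than 0.0001% in one step
    if abs(d_eps.mean()) < 1e-6 * abs(eps.mean()):
        break
```

Because only eps changes, the previously learned weight structure W is untouched, which is exactly the point of restricting the second phase to intrinsic plasticity.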

Importantly, if synaptic plasticity is also present during this second learning phase, ΔC increases dramatically with ongoing readaptation (solid lines in Figure 4D; different colors represent different noise levels). The initial drop of ΔC is due to intrinsic plasticity (dashed lines show final ΔC-values for intrinsic plasticity alone), while synaptic plasticity leads to a prolonged deterioration of the previously learned synaptic structure if stimuli are too noisy. We therefore conclude that the network has to maintain the synaptic weight structure during the readaptation phase, which we recreate by turning synaptic plasticity off. By doing so, the neuronal system can reliably adjust to stimuli of various noise levels using intrinsic plasticity to adapt the excitability of neurons.

Plastic Networks in Noisy Environments

Up to now, we have shown that a sparse, expansive network can learn the underlying organization of noise-free stimuli (ΔSlearn = 0) by means of synaptic and intrinsic plasticity. Afterwards, a readaptation phase with intrinsic plasticity alone enables the network to readapt to any arbitrary level of noise ΔStest (Figures 4A–C). However, if synaptic plasticity is active during the readaptation phase, the noise of stimuli leads to a disintegration of the synaptic structure (Figure 4D). Therefore, it is unclear whether the network can also learn the organization of stimuli from noisy—instead of noise-free—stimuli by using synaptic plasticity.

yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

/

/

t

mi
d
tu
norte
mi
norte
a
r
t
i
C
mi

pag
d

yo

F
/

/

/

/

/

4
1
1
7
4
1
8
6
6
8
2
2
norte
mi
norte
_
a
_
0
0
1
1
8
pag
d

.

t

To test this, we now investigate the effect of noisy stimuli during training in the encoding phase (i.e., ΔSlearn > 0). To do so, we present one noisy stimulus pattern Sν per cluster in each learning step L using a stimulus cluster size ΔSlearn. In noisy environments with up to ΔSlearn = 0.2, cortical neurons show neuronal and synaptic dynamics (Figure 5A, B) similar to noise-free learning (Figure 2B, D). Synaptic weights and firing thresholds become correlated with the static, structured network (Figure 5E, F) to a comparable degree (Figure 2C, E). Nevertheless, because of the noise of the stimuli, some cortical neurons do not manage to separate one stimulus cluster from all others (Figure 5D; 24% of all neurons for ΔSlearn = 0.2). Consequently, multiple clusters trigger the Hebbian term of synaptic plasticity (Equation 6) such that all synaptic weights approach a medium value (Figure 5C). These synaptic weights diminish the correlation with the static, structured synaptic weights, as the final distribution is slightly broader (Figure 5E) than the one from learning without noise (Figure 2C). Furthermore, the cortical neurons without structured incoming synaptic weights (unimodal weight distribution) on average have a lower final firing threshold (blue outliers in Figure 5F).

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
7
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

Overall, low levels of noise (ΔSlearn ≲ 0.25) are tolerated by the network without large losses in performance (Figure 6A). The failed-learning cortical neurons (Figure 5C, D), which become more numerous with higher noise levels (see Supplementary Figure 3), have a negative effect on the performance of the network. At ΔSlearn ≳ 0.25, the noise is so strong that the system is not able to recognize and learn the underlying organization of stimuli (that is, the existence of


yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

Figure 5. Self-organization of the synaptic and neuronal structure in a noisy environment. The dynamics of synaptic and intrinsic plasticity enable the sparse, expansive network to learn the underlying organization of stimuli from noisy stimulus patterns (here ΔSlearn = 0.2). (A, B) The majority of cortical neurons develop a distribution of incoming synaptic weights (A, blue lines) and membrane potential responses (B, red lines) similar to the ones learned without noise (Figure 2B, D). Here shown for neuron 2. The green line in (B) denotes the threshold. (C, D) However, the noise prevents some neurons (24%) from forming a proper synaptic structure (C), yielding a firing threshold (D) that does not separate the membrane potential evoked by one cluster from the others. Therefore, these neurons are not tuned to one specific cluster. Here shown for neuron 1. (E, F) Overall, the network trained by noisy stimuli develops synaptic weights (E) and firing thresholds (F) similarly correlated with the static, structured network as the network trained without noise (Figure 2C, E). The few neurons that failed learning lead to a minor broadening of the distributions.


Figure 6. The network can reliably learn from noisy stimuli with and without a readaptation phase. (A) Despite the presence of noise ΔSlearn during learning, the network can learn the organization of stimuli and, after encoding, classify stimuli of even higher noise levels ΔStest. However, higher levels of ΔSlearn decrease the performance. Color code depicts ΔC; the green line marks ΔC = ΔStest. (B) If the learning phase is followed by a readaptation phase using only intrinsic plasticity and the level of noise ΔStest with which the system is tested, the overall classification performance increases drastically. Now, stimuli with a noise level of up to ΔStest ≈ 0.8 can be classified. (C) The readaptation phase leads to a large performance gain for medium and high noise levels ΔStest. Color code depicts the difference between the network without and with a readaptation phase. Red area represents a benefit of using the readaptation phase. (A–C) Orange dashed line: identity line ΔSlearn = ΔStest.


different clusters). However, if there is little or even no noise during learning, the network can subsequently not only classify stimuli of that same level of noise, but also classify significantly noisier stimuli (white area above the orange dashed identity line). This result indicates that the network does not adapt specifically to only the noise level ΔSlearn it is learning from, but that the network generalizes across a broad variety of different noise levels ΔStest. For example, although the network may learn from stimulus patterns with an average noise level of ΔSlearn = 0.1, it can reliably classify stimuli of noise levels ΔStest from 0 to about 0.6 afterwards.

Además, the performance of a network that was successfully trained in a noisy envi-
ronment can be drastically improved by a subsequent readaptation phase. Using this second

yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

t

/

/

mi
d
tu
norte
mi
norte
a
r
t
i
C
mi

pag
d

yo

F
/

/

/

/

/

4
1
1
7
4
1
8
6
6
8
2
2
norte
mi
norte
_
a
_
0
0
1
1
8
pag
d

.

t

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
7
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

Figure 7. Schematic summary of results. Noisy patterns Sν are repeatedly generated from original stimuli S̄ν (e.g., a triangle, a circle, and a cross) and imprinted on the stimulus layer (encoding phase). If the noise ΔSlearn is sufficiently small, synaptic and intrinsic plasticity lead to the formation of structure encoding the organization of stimuli (existence of different geometrical forms). After this initial learning phase, a second learning or readaptation phase enables the network to classify stimuli even in the presence of very high levels of noise ΔStest. Here, only intrinsic plasticity should be present (ẇ = 0; ε̇ ≠ 0). This suggests that learning is carried out in two phases: In the first phase, the encoding phase, synaptic weights develop to represent the basic organization of the environmental stimuli. This structuring of synaptic weights is most efficient if the noise ΔSlearn is low. In the second phase, the readaptation phase, learning is dominated by intrinsic plasticity while synaptic weights have to be maintained. The cortical firing thresholds are then able to quickly adapt to the current level of noise ΔStest. Thereby, intrinsic plasticity approximates the optimal thresholds for a given value of ΔStest, maximizing performance.


phase in order to (re)adapt the neuronal excitabilities to the level of noise ΔStest that will subsequently be tested enables the network to classify stimuli up to even higher noise levels of ΔStest ≈ 0.8 (Figure 6B). Consequently, the readaptation phase provides a significant advantage for a large regime of stimulus cluster sizes (red area in Figure 6C). Even more so, stimulus clusters with sizes ΔStest ∈ (0.6, 0.8) can only be classified by using the readaptation phase. The decrease in performance for noise levels between ΔSlearn ∈ (0.2, 0.3) and ΔStest ∈ (0.8, 1.0) (blue area) is not crucial given the low level of performance (Figure 6A).

In summary, sparse, expansive networks can learn the clustered organization of noisy stimuli (underlying stimuli might be a triangle, a circle, and a cross, as in Figure 7) through the interplay of synaptic and intrinsic plasticity in a self-organized manner. During the initial encoding phase, low levels of noise ΔSlearn can be tolerated by the system, while higher levels of noise obstruct the network's ability to learn the organization of stimuli. After the encoding phase, the network can reliably classify noisy patterns of up to ΔStest ≈ 0.6 if synaptic weights and neuronal firing thresholds are fixed (ẇ = 0; ε̇ = 0). On the other hand, the performance decreases significantly if both synaptic and intrinsic plasticity are allowed to modify the network's structure during the presentation of these noisy stimuli (ẇ ≠ 0; ε̇ ≠ 0). Interestingly, if the synaptic structure is maintained while the excitability of the cortical neurons can adapt (ẇ = 0; ε̇ ≠ 0), the network can successfully classify stimuli even in the presence of very high levels of noise (see Figure 7, bottom, for examples). These results suggest that learning in the presence of noise requires two distinct phases of plasticity: initial learning of the organization of environmental stimuli via synaptic and intrinsic plasticity in the encoding phase, followed by a readaptation phase using only intrinsic plasticity in order to readapt to the current level of noise.

DISCUSSION

How do neuronal systems learn the underlying organization of the surrounding environment
in realistic, noisy conditions? In this study, we have shown that sparse and expansive networks can reliably form the required neuronal and synaptic structures via the interplay of synaptic and intrinsic plasticity. Among others, our results indicate that, after learning, the classification of diverse environmental stimuli in the presence of high levels of noise works best if the synaptic structure is more rigid than the neuronal structure, namely the excitabilities of the neurons.
Thereby, our model predicts that higher levels of noise lead to lower firing thresholds or (en
promedio) increased neuronal excitabilities (Figura 4A).

Furthermore, our model predicts that classification performance is highest if the system is adapted to the perceived level of noise. We propose the following psychophysical experiment related to pattern recognition in order to test this prediction: First, subjects have to learn a set of previously unknown patterns, such as visual or auditory patterns. Second, they have to identify noisy versions of these patterns. We propose that the classification performance of a given noisy pattern depends on the history of patterns the subject perceived beforehand. Specifically, our model predicts that a given noisy pattern is classified most reliably if the previously perceived patterns had the same level of noise. By transferring this protocol to an animal model, the predicted course of the adaptation of the firing thresholds could be verified, too.

After the successful learning of the inherent organization of stimuli, in this study we changed the synaptic variability by "turning off" the dynamics of synaptic plasticity (Figure 4). This change of the timescale of synaptic plasticity between the encoding and the readaptation phase could be related to the dynamics during the development of the visual system (Daw, 2003; Daw, Fox, Sato, & Czepita, 1992; Hensch, 2004; Hooks & Chen, 2007). During the critical period, the early visual system is quite susceptible to new sensory experiences and the system


is very plastic. Moreover, the visual range during early developmental phases is limited, which could imply lower levels of noise. Thus, the encoding phase in our model could be linked to the critical period. Conversely, the matured visual system is quite rigid, matching the requirements of the readaptation phase, which predicts that the sensory system should be able to adapt to different levels of noise by (only) changing the neuronal excitabilities (Figure 6).

One of the major assumptions of this work, similar to a previous study (Babadi & Sompolinsky, 2014), is that environmental stimuli are organized such that they can be grouped into clusters. Each of these clusters has the same Gaussian noise level ΔS. Natural stimuli, however, have much more structured noise statistics. Nevertheless, the mechanisms considered here that enable the network to compensate for noisy stimuli (i.e., synaptic and intrinsic plasticity) do not specifically rely on the noise being Gaussian. Intrinsic plasticity will still maintain the target firing rate independent of precisely how the membrane potential distributions (Figure 3A, B) are shaped by different types of noise. Given our results (Figure 4), we expect that the neuronal thresholds resulting in the target firing rate will be close to the optimal threshold. Furthermore, the exponential synaptic decay may lead to less reliable presynaptic stimulus neurons having a smaller impact on a cortical neuron's firing. In addition to clusters not being Gaussian shaped, in a natural environment each underlying stimulus may also have a different overall level of noise such that ΔSν depends on the cluster ν. However, if the synaptic structure has already been learned during the encoding phase, we expect that cluster-specific ΔSνtest do not have an impact on the classification performance, as each cortical neuron becomes selective to only one stimulus cluster (Figure 2D). Furthermore, only the noise level of this selected cluster defines the optimal firing threshold (Figure 3). Therefore, the firing threshold of each neuron can be tuned to its distinct, optimal threshold value, which is independent of the noise levels of other clusters. On the other hand, we expect that different ΔSνlearn during the encoding phase will lead to over- and underrepresentations of stimulus clusters in the network. Since noise attenuates competition between clusters (Figure 5C, D), clusters with high ΔSνlearn are less competitive and will subsequently be underrepresented. Nevertheless, the underrepresentation could be an advantage, as stimuli that are too noisy are less informative about the environment than others; consequently, the neuronal system attributes a smaller amount of resources (neurons and synapses) to them. However, the effect of cluster-specific noise on the neuronal and synaptic dynamics has to be investigated further.

Furthermore, some stimulus clusters might be perceived more often than others. The corresponding representations would become larger than average, since their relevant synapses are strengthened more often by Hebbian synaptic plasticity, leading to a competitive advantage. Larger representations of more frequently perceived stimulus clusters might provide a behavioral advantage, as these clusters also need to be classified more often. However, the discrepancy between the frequency of such a cluster and the target firing rate of a cortical neuron responding to it might pose a problem. As intrinsic plasticity tries to maintain the target activity, the firing threshold would be placed so high that even slight noise could not be tolerated. One solution might be that neurons could have different target activities (G. G. Turrigiano, 2008) and clusters are selected such that target activity and presentation frequency match. A different mechanism could be global inhibition. A single inhibitory neuron or population of neurons connected to all relevant cortical neurons could homeostatically regulate the activity of the cortical layer by providing inhibitory feedback. Such a mechanism has been identified, for example, in the Drosophila mushroom body (Eichler et al., 2017; Faghihi, Kolodziejski, Fiala, Wörgötter, & Tetzlaff, 2013).

In this study, only one combination of three different plasticity rules was investigated. Of course, many more plasticity mechanisms are conceivable and have been widely studied
Network Neuroscience

190

(Dayan & Abbott, 2001; Miner & Triesch, 2016; Tetzlaff, Kolodziejski, Markelic, & Wörgötter, 2012; Zenke, Agnes, & Gerstner, 2015). One mechanism could be synaptic scaling, regulating the synaptic weights instead of the neuronal excitability such that the neurons reach a certain target firing rate (Desai, Cudmore, Nelson, & Turrigiano, 2002; Hengen, Lambo, Van Hooser, Katz, & Turrigiano, 2013; Keck et al., 2013; Tetzlaff et al., 2011; G. G. Turrigiano et al., 1998). However, the timescale of synaptic scaling is significantly slower than the timescale of intrinsic plasticity, which could increase the duration of the readaptation phase required by the neuronal system to adapt to new levels of noise. On the other hand, faster homeostatic mechanisms (Zenke & Gerstner, 2017) could result in a shorter duration of readaptation. In any case, the influence of further plasticity mechanisms on the dynamics of sparse, expansive networks has to be analyzed in future studies.

It is usually assumed that homeostatic synaptic plasticity is required for competition (Abbott & Nelson, 2000; Miller, 1996). In the present study, however, competition arises from the interactions of Hebbian synaptic plasticity and homeostatic intrinsic plasticity alone. Homeostatic intrinsic plasticity maintains a certain activity of a given cortical neuron. Stimuli compete for this activity. If one stimulus gains an activity advantage, it will see the synapses activated by it strengthened. This leads to less strengthening of other synapses, because the occurrence of Hebbian synaptic plasticity is limited by homeostatic intrinsic plasticity. Synapses will only subsequently be weakened due to homeostatic synaptic plasticity (the exponential decay term), which does not interfere in the interaction between Hebbian synaptic and homeostatic intrinsic plasticity generating competition (see Supplementary Figure 2). As a consequence, the widely held opinion that homeostatic synaptic plasticity is required for competition might have to be revised.

Even though expansion is a common feature of sensory processing networks, it is not a prerequisite for the results presented here. Nonexpansive networks, too, can learn to distinguish different clusters, although they do not reach the performance of an expansive network (see Supplementary Figure 4). This means that nonexpansive networks profit as well from a two-phase learning protocol as suggested here.

Overall, this study suggests the following answer to how networks learn to classify stimuli in noisy environments: Learning takes place in two distinct phases. The first phase is the encoding phase. Hebbian synaptic and homeostatic intrinsic plasticity structure the synaptic weights so as to represent the organization of stimuli, with each neuron becoming selectively responsive to a single stimulus cluster. Optimal synaptic structure is achieved if stimuli are noise-free. The second learning phase, called the readaptation phase, ensues in an arbitrarily noisy environment. Here, synaptic weights have to be maintained in order to preserve the previously learned synaptic structure. Meanwhile, homeostatic intrinsic plasticity regulates the activity of neurons. The firing thresholds are thereby adapted to their optimal values, maximizing classification performance in the current environment (Figure 7).

METHODS

Network and Plasticity Mechanisms

In this study, a two-layered feed-forward network of rate-based neurons is investigated (Figure 1A). The first layer, called the stimulus layer, consists of NS = 1,000 neurons, while the second layer, called the cortical layer, consists of NC = 10,000 neurons. Feed-forward synaptic connections exist from all stimulus to all cortical neurons. Their synaptic strengths are given
by ωji, where j ∈ {1, …, NC} denotes the postsynaptic cortical neuron and i ∈ {1, …, NS} the presynaptic stimulus neuron. No recurrent connections are present.

The neurons of the stimulus layer act as input. As such, the firing rate Si of stimulus neuron i will be set to either 0 or 1. Each input therefore is a pattern of firing rates Si ∈ {0, 1} on the stimulus layer. These firing rates elicit membrane potentials in the cortical neurons, which follow the leaky integrator equation \dot{u}_j = -u_j + \sum_{i=1}^{N_S} \omega_{ji} S_i. We assume that each input pattern is presented long enough such that the membrane potential mostly resides in the fixed point for the current input. In order to save computation time, we therefore discard the leaky integrator dynamics and simplify the membrane potential to the fixed point of the leaky integrator equation:

u_j = \sum_{i=1}^{N_S} \omega_{ji} S_i .   (9)
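To illustrate the fixed-point simplification, here is a minimal numerical sketch (a single cortical neuron; variable names are our own): Euler-integrating the leaky integrator under a constant binary input drives the membrane potential to the fixed point of Equation 9.

```python
import numpy as np

rng = np.random.default_rng(0)
N_S = 1000                      # stimulus neurons
w = rng.normal(0.0, 0.1, N_S)   # incoming weights of one cortical neuron
S = rng.integers(0, 2, N_S)     # binary input pattern

# Euler-integrate the leaky integrator du/dt = -u + sum_i w_i * S_i
u, dt = 0.0, 0.01
for _ in range(5000):
    u += dt * (-u + w @ S)

# Under long constant input the potential sits at the fixed point of Eq. 9
assert abs(u - w @ S) < 1e-6
```

The fixed point is reached exponentially fast, which is why discarding the transient dynamics is harmless as long as presentations are long compared with the membrane time constant.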

The membrane potential u_j will then be translated into a firing rate C_j of cortical neuron j via the sigmoidal transfer function

C_j = \frac{F^{max}}{1 + \exp(\beta (\varepsilon_j - u_j))} ,   (10)

resulting in cortical firing rates between 0 and F^max. The steepness of the sigmoidal function is given by β = 5, the maximum firing rate by F^max = 1, and the point of inflection ε_j is specific to each cortical neuron j. ε_j corresponds to a neuron-specific firing threshold determining the neuronal excitability.

Intrinsic plasticity regulates this neuron-specific firing threshold ε_j. In order for each cortical neuron j to reach a target firing rate F^T = 0.001, the point of inflection of the sigmoidal transfer curve follows the dynamics

\dot{\varepsilon}_j = \kappa (C_j - F^T) .   (11)

The parameter κ = 1 · 10^−2 determines the adaptation speed of intrinsic plasticity. If the firing rate C_j of cortical neuron j is larger than the target firing rate F^T, the threshold ε_j increases such that C_j decreases (assuming the input stays constant), and vice versa.

The feed-forward synaptic connections ω_{ji} between the postsynaptic cortical neuron j and the presynaptic stimulus neuron i are controlled by unsupervised synaptic plasticity:

\dot{\omega}_{ji} = \mu S_i C_j - \eta \omega_{ji} .   (12)

The parameters μ = 1 · 10^−5 and η = 3 · 10^−8 determine the speed of the Hebbian correlation learning term and the exponential decay of synaptic weights, respectively. We assume that synaptic plasticity acts much slower than the presentation time of a single input pattern such that the fixed point of the leaky integrator given by Equation 9 does not significantly change during the presentation of a single input and the simplification thus still holds.
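The three update rules (Equations 9–12) can be condensed into a single simulation step. The following sketch uses reduced network sizes and our own variable names; the parameter values of β, F^max, F^T, κ, μ, and η follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N_S, N_C = 100, 50                     # reduced sizes for illustration
beta, F_max, F_T = 5.0, 1.0, 0.001
kappa, mu, eta = 1e-2, 1e-5, 3e-8

w = rng.normal(0.0, 2.0 / np.sqrt(N_S), (N_C, N_S))  # feed-forward weights
eps = np.zeros(N_C)                                   # firing thresholds

def step(S):
    """One update for a binary input pattern S (Eqs. 9-12)."""
    u = w @ S                                        # membrane potentials (Eq. 9)
    C = F_max / (1.0 + np.exp(beta * (eps - u)))     # firing rates (Eq. 10)
    eps[:] += kappa * (C - F_T)                      # intrinsic plasticity (Eq. 11)
    w[:] += mu * np.outer(C, S) - eta * w            # Hebbian term + decay (Eq. 12)
    return C

S = rng.integers(0, 2, N_S).astype(float)
for _ in range(1000):
    C = step(S)
```

Repeated presentation drives each neuron's rate toward F^T through the growth of ε_j, while the much slower Hebbian term gradually structures the weights, matching the separation of timescales assumed above.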

Clustered Stimuli

The structuring of the inputs and the analysis methods are similar to a previous work (Babadi & Sompolinsky, 2014). Here, sensory stimuli are grouped in P = 1,000 clusters. Each cluster
comprises different sensory impressions of the same environmental stimulus. Its main component is a characteristic neuronal firing pattern, called the central stimulus pattern \bar{S}^\nu, where ν ∈ {1, …, P} denotes the cluster (Figure 1A, B). All central patterns are generated by assigning each stimulus neuron i for each stimulus cluster ν a firing rate \bar{S}_i^\nu of either 0 or 1 with equal probability. In addition to the central pattern, each cluster also contains noisy variants of the central pattern, called noisy patterns S^\nu. Noisy stimulus patterns are generated by randomly changing the central stimulus pattern's firing rates from 1 to 0 or vice versa with probability ΔS/2. ΔS thereby determines the level of noise and consequently the size of the stimulus clusters, and can range from 0 (no noise) to 1 (no correlation remains). Furthermore, it is the normalized average Hamming distance of noisy stimulus patterns to their central stimulus pattern:

\Delta S = \left\langle \frac{\sum_{i=1}^{N_S} |S_i^\nu - \bar{S}_i^\nu|}{N_S \cdot 1/2} \right\rangle_{S^\nu, \nu} ,   (13)

with the angular brackets denoting the average over all noisy stimulus patterns S^\nu of all clusters ν.
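The stimulus generation and the empirical check of Equation 13 can be sketched as follows (our own variable names; a reduced number of clusters for speed):

```python
import numpy as np

rng = np.random.default_rng(2)
N_S, P, dS = 1000, 200, 0.4

# Central stimulus patterns: entries 0 or 1 with equal probability
central = rng.integers(0, 2, (P, N_S))

# Noisy variants: flip each firing rate with probability dS / 2
flips = rng.random((P, N_S)) < dS / 2
noisy = np.where(flips, 1 - central, central)

# Normalized average Hamming distance (Eq. 13) recovers the noise level
dS_measured = np.abs(noisy - central).sum(axis=1).mean() / (N_S * 0.5)
assert abs(dS_measured - dS) < 0.02
```

Since each of the N_S entries flips independently with probability ΔS/2, the expected Hamming distance is N_S · ΔS/2, and dividing by N_S/2 returns ΔS.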

All central and noisy stimulus patterns elicit central and noisy cortical patterns \bar{C}^\nu and C^\nu, respectively, in the cortical layer of the network. In analogy to Equation 13, the (uncorrected) size of the resulting cortical clusters can be defined as Δc via

\Delta c = \left\langle \frac{\sum_{j=1}^{N_C} |C_j^\nu - \bar{C}_j^\nu|}{N_C \cdot Z(C^\nu, \bar{C}^\nu)} \right\rangle_{C^\nu, \nu} .   (14)

As the firing rates C_j^\nu and \bar{C}_j^\nu can take on values between 0 and 1, a more complex normalization Z(C^\kappa, C^\lambda) for the patterns C^\kappa and C^\lambda is required:

Z(C^\kappa, C^\lambda) = \frac{1}{N_C^2} \sum_{l=1}^{N_C} \sum_{m=1}^{N_C} |C_l^\kappa - C_m^\lambda| .   (15)

This normalization quantifies the average overlap two random cortical patterns with the same firing rates would have.

Being generated randomly, the central stimulus patterns are uncorrelated among each other. Because of the propagation of these patterns through the synaptic connections, however, the central cortical patterns might not be uncorrelated. In the context of noise reduction a more appropriate performance measure compensates for the introduced correlation. The cortical cluster size ΔC is therefore defined as

\Delta C = \frac{\Delta c}{d_C} ,   (16)

where the cortical cluster distance d_C is a measure of the correlation between central cortical patterns:

d_C = \left\langle \frac{\sum_{j=1}^{N_C} |\bar{C}_j^\kappa - \bar{C}_j^\lambda|}{N_C \cdot Z(\bar{C}^\kappa, \bar{C}^\lambda)} \right\rangle_{\kappa, \lambda} .   (17)
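Equations 14–17 translate directly into code. The sketch below (helper names are our own) computes Z by its O(N²) definition; for noise-free "noisy" patterns Δc is exactly zero, and uncorrelated sparse central patterns give a cluster distance d_C close to 1.

```python
import numpy as np

def Z(Ck, Cl):
    """Normalization of Eq. 15: average |difference| over all entry pairs."""
    return np.abs(Ck[:, None] - Cl[None, :]).mean()

def cluster_size(noisy, central):
    """Uncorrected cortical cluster size Delta-c (Eq. 14).

    noisy:   (P, n, N_C) array, n noisy cortical patterns per cluster
    central: (P, N_C) array of central cortical patterns
    """
    vals = [np.abs(C - Cbar).sum() / (Cbar.size * Z(C, Cbar))
            for cluster, Cbar in zip(noisy, central) for C in cluster]
    return float(np.mean(vals))

def cluster_distance(central):
    """Cortical cluster distance d_C (Eq. 17), averaged over pairs k != l."""
    P = len(central)
    vals = [np.abs(central[k] - central[l]).sum()
            / (central.shape[1] * Z(central[k], central[l]))
            for k in range(P) for l in range(P) if k != l]
    return float(np.mean(vals))

rng = np.random.default_rng(3)
central = (rng.random((20, 500)) < 0.05).astype(float)  # sparse, uncorrelated
noisy = central[:, None, :].repeat(5, axis=1)           # noise-free "variants"

d_C = cluster_distance(central)          # close to 1 for uncorrelated patterns
Delta_C = cluster_size(noisy, central) / d_C            # Eq. 16; 0 without noise
```

For binary patterns drawn independently, numerator and normalization of Equation 17 have the same expectation, which is why d_C ≈ 1 signals uncorrelated central cortical patterns.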

Classification Errors
In the following, the classification errors e_fp,j (false positives) and e_fn,j (false negatives) of single neurons in a trained network will be set into relation with the cortical cluster size ΔC. To do so, we will first discuss the cortical cluster distance d_C and the ergodicity of the network. We will then use the results to derive the relation between the cortical cluster size and the classification errors.

In order to simplify the following derivations for ΔC = Δc/d_C, we consider that the cortical cluster distance d_C = 1 in a trained network, as discussed in the following. In the 'Plastic Network' section (see, e.g., Figure 2) we have demonstrated that a given cortical neuron becomes responsive to a single stimulus cluster during training. The stimulus cluster that a neuron becomes responsive to is usually the one that initially elicits the strongest membrane potential, as the related synapses will experience the greatest strengthening by Hebbian plasticity. This cluster is a random one, though, as the initial membrane potential depends only on the initially random synaptic weights. As a consequence, each cortical neuron becomes responsive to a random stimulus cluster. This implies that the cortical clusters are uncorrelated since each cortical neuron's response to a given cluster is random. By definition, the cortical cluster distance d_C is thus equal to 1. We therefore have

\Delta C = \Delta c = \left\langle \frac{\sum_{j=1}^{N_C} |C_j^\nu - \bar{C}_j^\nu|}{N_C \cdot Z(C^\nu, \bar{C}^\nu)} \right\rangle_{C^\nu, \nu} .   (18)

Next, we will assume that a trained network is ergodic, that is, we can exchange averages over cortical patterns ("time") with averages over cortical neurons ("space"). Specifically, we assume the following relation to hold:

\left\langle \frac{\sum_{j=1}^{N_C} |C_j^\nu - \bar{C}_j^\nu|}{N_C \cdot Z(C^\nu, \bar{C}^\nu)} \right\rangle_{C^\nu, \nu} = \left\langle \frac{\sum_{\nu=1}^{P} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)} \right\rangle_{C_j, j}   (19)

with

Z(C^\nu, \bar{C}^\nu) = \frac{1}{N_C^2} \sum_{l=1}^{N_C} \sum_{m=1}^{N_C} |C_l^\nu - \bar{C}_m^\nu| ,   (20)

Z(C_j, \bar{C}_j) = \frac{1}{P^2} \sum_{\kappa=1}^{P} \sum_{\lambda=1}^{P} |C_j^\kappa - \bar{C}_j^\lambda| .   (21)

C_j and \bar{C}_j are vectors containing the firing rates of cortical neuron j in response to one noisy/central pattern of each cluster.

In the following, we will divide the assumption about the ergodicity of the network into several smaller assumptions and discuss whether they are valid. First, we assume a large system, that is, P, N_C → ∞, which approximates the network studied here, consisting of P = 1,000 and N_C = 10,000, adequately. Second, we need to assume that the set of cortical clusters is homogeneous, that is, ΔC is the same for all clusters. This is a sufficient approximation as ΔS is the same for all clusters and no cluster is preferred by the cortical neurons in a trained network as discussed before. Given these two assumptions, we can drop the average over noisy patterns C^\nu in Equation 19, because the average over a single noisy pattern of an infinite number of clusters is equal to the average over all noisy patterns of an infinite number of clusters as long as the clusters are homogeneous. Likewise, by assuming that the set of cortical neurons is
homogeneous, we can drop the average over the sets of noisy firing rates C_j in Equation 19. In a trained network, where all neurons developed a bimodal weight distribution and have the same target firing rate, this is a decent approximation. We can thus write the following:

\left\langle \frac{\sum_{j=1}^{N_C} |C_j^\nu - \bar{C}_j^\nu|}{N_C \cdot Z(C^\nu, \bar{C}^\nu)} \right\rangle_{C^\nu, \nu} = \left\langle \frac{\sum_{\nu=1}^{P} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)} \right\rangle_{C_j, j}   (22)

\overset{P, N_C \to \infty}{\Longleftrightarrow} \quad \frac{1}{P} \sum_{\nu=1}^{P} \frac{\sum_{j=1}^{N_C} |C_j^\nu - \bar{C}_j^\nu|}{N_C \cdot Z(C^\nu, \bar{C}^\nu)} = \frac{1}{N_C} \sum_{j=1}^{N_C} \frac{\sum_{\nu=1}^{P} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)}   (23)

\Longleftrightarrow \quad \frac{1}{P} \sum_{\nu=1}^{P} \frac{1}{N_C} \sum_{j=1}^{N_C} \frac{|C_j^\nu - \bar{C}_j^\nu|}{\frac{1}{N_C^2} \sum_{l=1}^{N_C} \sum_{m=1}^{N_C} |C_l^\nu - \bar{C}_m^\nu|} = \frac{1}{N_C} \sum_{j=1}^{N_C} \frac{1}{P} \sum_{\nu=1}^{P} \frac{|C_j^\nu - \bar{C}_j^\nu|}{\frac{1}{P^2} \sum_{\kappa=1}^{P} \sum_{\lambda=1}^{P} |C_j^\kappa - \bar{C}_j^\lambda|}   (24)

A sufficient condition for this equality is that the corresponding summands match for every pair (j, ν):

\frac{|C_j^\nu - \bar{C}_j^\nu|}{\frac{1}{N_C^2} \sum_{l=1}^{N_C} \sum_{m=1}^{N_C} |C_l^\nu - \bar{C}_m^\nu|} = \frac{|C_j^\nu - \bar{C}_j^\nu|}{\frac{1}{P^2} \sum_{\kappa=1}^{P} \sum_{\lambda=1}^{P} |C_j^\kappa - \bar{C}_j^\lambda|} \quad \forall j, \nu ,   (25)

which in turn holds if the denominators are equal:

\frac{1}{N_C^2} \sum_{l=1}^{N_C} \sum_{m=1}^{N_C} |C_l^\nu - \bar{C}_m^\nu| = \frac{1}{P^2} \sum_{\kappa=1}^{P} \sum_{\lambda=1}^{P} |C_j^\kappa - \bar{C}_j^\lambda| \quad \forall j, \nu ,   (26)

that is, if

\left\langle \frac{1}{N_C} \sum_{m=1}^{N_C} |C_l^\nu - \bar{C}_m^\nu| \right\rangle_l = \left\langle \frac{1}{P} \sum_{\lambda=1}^{P} |C_j^\kappa - \bar{C}_j^\lambda| \right\rangle_\kappa \quad \forall j, \nu .   (27)

Equation 26 is a sufficient, but not a necessary, condition for Equation 25. Therefore, if we can show that Equation 27 is a valid assumption, this suffices (together with the assumptions mentioned above) for the ergodicity of a trained network.

As demonstrated in the Results section and discussed before, every cortical neuron of the trained network is responsive to a single, random cluster. We need to further assume that every central pattern elicits activity in F^T N_C = 10 of all cortical neurons. This is strictly true only on average, but if cortical neurons respond to a random cluster, it is a decent approximation. As a consequence, we can do the following transformations:

\left\langle \frac{1}{N_C} \sum_{m=1}^{N_C} |C_l^\nu - \bar{C}_m^\nu| \right\rangle_l = \left\langle \frac{10}{N_C} |C_l^\nu - 1| + \frac{N_C - 10}{N_C} |C_l^\nu - 0| \right\rangle_l \approx \left\langle C_l^\nu \right\rangle_l \quad \forall j, \nu   (28)

\left\langle \frac{1}{P} \sum_{\lambda=1}^{P} |C_j^\kappa - \bar{C}_j^\lambda| \right\rangle_\kappa = \left\langle \frac{1}{P} |C_j^\kappa - 1| + \frac{P - 1}{P} |C_j^\kappa - 0| \right\rangle_\kappa \approx \left\langle C_j^\kappa \right\rangle_\kappa \quad \forall j, \nu   (29)

\Longrightarrow \quad \left\langle C_l^\nu \right\rangle_l = \left\langle C_j^\kappa \right\rangle_\kappa \quad \forall j, \nu .   (30)

That is, for each ΔS every cortical pattern ν and every cortical neuron j must have the same average firing rate. This is true given the assumptions we have already discussed: the cortical neurons are homogeneous, that is, they all have a bimodal weight distribution and so forth, and each cortical neuron is responsive to a random cluster.

In total, we have divided the assumption of ergodicity (Equation 19) of a trained network into simpler assumptions that we were able to validate. Using the ergodicity we now have

\Delta C = \left\langle \frac{\sum_{\nu=1}^{P} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)} \right\rangle_{C_j, j} .   (31)

Similar to the argument made for Equation 23, in an infinitely large network, averaging
over an infinite set of noisy firing rates Cj of a single cortical neuron j is equal to averaging
over an infinite set of noisy firing rates Cj of all cortical neurons as long as the neurons are
homogeneous. We can thus drop the average over j:

\Delta C = \left\langle \frac{\sum_{\nu=1}^{P} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)} \right\rangle_{C_j, j} \overset{P, N_C \to \infty}{\longrightarrow} \left\langle \frac{\sum_{\nu=1}^{P} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)} \right\rangle_{C_j}   (32)

= \left\langle \frac{\sum_{\nu | \bar{C}_j^\nu = 1} |C_j^\nu - \bar{C}_j^\nu| + \sum_{\nu | \bar{C}_j^\nu = 0} |C_j^\nu - \bar{C}_j^\nu|}{P \cdot Z(C_j, \bar{C}_j)} \right\rangle_{C_j}   (33)

= \left\langle \frac{\sum_{\nu | \bar{C}_j^\nu = 1} |C_j^\nu - \bar{C}_j^\nu| + \sum_{\nu | \bar{C}_j^\nu = 0} |C_j^\nu - \bar{C}_j^\nu|}{\frac{1}{P} \sum_{\kappa=1}^{P} \left( 1 \cdot |C_j^\kappa - 1| + (P - 1) \cdot |C_j^\kappa - 0| \right)} \right\rangle_{C_j}   (34)

= \left\langle \frac{\sum_{\nu | \bar{C}_j^\nu = 1} |C_j^\nu - \bar{C}_j^\nu| + \sum_{\nu | \bar{C}_j^\nu = 0} |C_j^\nu - \bar{C}_j^\nu|}{1 + \frac{P - 2}{P} \sum_{\kappa=1}^{P} C_j^\kappa} \right\rangle_{C_j}   (35)

= \underbrace{\left\langle \frac{\sum_{\nu | \bar{C}_j^\nu = 1} |C_j^\nu - \bar{C}_j^\nu|}{1 + \frac{P - 2}{P} \sum_{\kappa=1}^{P} C_j^\kappa} \right\rangle_{C_j}}_{\text{false negatives } e_{fn,j}} + \underbrace{\left\langle \frac{\sum_{\nu | \bar{C}_j^\nu = 0} |C_j^\nu - \bar{C}_j^\nu|}{1 + \frac{P - 2}{P} \sum_{\kappa=1}^{P} C_j^\kappa} \right\rangle_{C_j}}_{\text{false positives } e_{fp,j}} .   (36)

Initialization
When initialized randomly, the synaptic weights ω_{ji} are drawn from a Gaussian distribution with mean 0 and variance \frac{2}{\sqrt{N_S}}. The synaptic weights can also be initialized in a structured manner according to \omega_{ji} = \frac{100}{N_S} \sum_{\nu=1}^{P} (\bar{S}_i^\nu - \frac{1}{2})(R_j^\nu - F^T) (similar to Babadi & Sompolinsky, 2014; Tsodyks & Feigelman, 1988). The factor 100 scales the synaptic weights such that they are in the same order of magnitude as the synaptic weights in the trained dynamic network, ensuring that they are comparable. R^\nu are cortical patterns that are generated using one of the following methods: For Figure 1, R^\nu are random patterns of ones and zeros where each pattern ν and each cortical neuron j has an activity of F^T. For all other results, R^\nu are computed via R_j^\nu = \Theta(\bar{C}_j^\nu - T_j), where Θ denotes the Heaviside function and the thresholds T_j are chosen such that each cortical neuron j achieves an activity of F^T. This results in cortical patterns R^\nu that are correlated to the central cortical patterns \bar{C}^\nu of an existing network.

The cortical membrane thresholds ε_j are then initialized such that each cortical neuron j achieves an average firing rate of the target firing rate F^T at the central cortical patterns. In order to find the corresponding membrane thresholds ε_j, the secant method is used with initial values of 0 and the mean of the highest and second highest (as F^T P = 1) membrane potentials
of cortical neuron j. If structured synaptic weights are used, this leads to ε j close to the mean
of the highest and second highest membrane potentials of neuron j.
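The structured weight initialization and the secant search for the thresholds can be sketched together as follows. This is our own illustration with a single neuron and a reduced system (P = 100 with F^T = 0.01, chosen so that F^T P = 1 still holds); the structured-weight formula follows the Initialization section above.

```python
import numpy as np

rng = np.random.default_rng(5)
N_S, P = 1000, 100
beta, F_max, F_T = 5.0, 1.0, 0.01    # larger F_T so that F_T * P = 1 in this toy

Sbar = rng.integers(0, 2, (P, N_S)).astype(float)   # central stimulus patterns
R = np.zeros(P)
R[0] = 1.0                # the neuron's R-pattern: active for exactly one cluster

# Structured feed-forward weights of one cortical neuron
w = 100.0 / N_S * ((Sbar - 0.5) * (R - F_T)[:, None]).sum(axis=0)

u = Sbar @ w              # membrane potentials at the central patterns

def mean_rate(eps):
    """Average rate over central patterns minus the target rate."""
    return (F_max / (1.0 + np.exp(beta * (eps - u)))).mean() - F_T

# Secant method: start from 0 and the mean of the two largest potentials
x0, x1 = 0.0, np.mean(np.sort(u)[-2:])
for _ in range(100):
    f0, f1 = mean_rate(x0), mean_rate(x1)
    if abs(f1) < 1e-12 or f1 == f0:
        break
    x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)

eps_j = x1
assert abs(mean_rate(eps_j)) < 1e-9
```

With structured weights the second starting point already satisfies the condition to numerical precision, matching the remark that ε_j ends up close to the mean of the highest and second highest membrane potentials.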

Implementation
Training is done by repeatedly presenting stimulus patterns for one time step Δt = 1 each. The high computational demand of simulating a network with 11,000 neurons for 200,000 training steps, each containing 1,000 patterns, for just a single parameter set made parallel simulation of patterns necessary and required the usage of a computer cluster. To this end, each training step consists of the parallel simulation of one stimulus pattern per cluster. Stimulus patterns are generated using a stimulus cluster size ΔS_learn, and the current learning step is denoted by L. The synaptic weights ω_{ji} and cortical thresholds ε_j are updated at the end of each training step according to \Delta \omega_{ji} = \sum_{\nu=1}^{P} \dot{\omega}_{ji}(C^\nu) and \Delta \varepsilon_j = \sum_{\nu=1}^{P} \dot{\varepsilon}_j(C^\nu), where C^\nu denotes the cortical pattern of cluster ν that was simulated in this learning step. This is a reasonable approximation if a single learning step changes the network's state only slightly, as is the case in this study.

The cortical cluster size ΔC (cf. Equation 14 and Equation 16) in response to a stimulus cluster size ΔS_test is approximated using 10 noisy patterns per cluster.

If intrinsic plasticity is active during the testing phase, one noisy pattern per cluster is repeatedly simulated using a stimulus cluster size ΔS_test. After the mean of all cortical thresholds has changed by less than 0.0001%, the cortical cluster size ΔC is calculated for the given stimulus cluster size ΔS_test. In order to speed up its computation, we used the central cortical patterns \bar{C}^\nu and the cortical cluster distance d_C from ahead of the adaptation phase and were able to verify that this does not influence the results. The thresholds are reset to their previous values afterwards. The entire procedure is performed for all ΔS_test, each requiring fewer than 7,000 learning steps for the thresholds to converge.

SUPPORTING INFORMATION

Supporting information for this article is available at https://doi.org/10.1162/netn_a_00118.

AUTHOR CONTRIBUTIONS

Steffen Krüppel: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Software; Validation; Visualization; Writing – Original Draft; Writing – Review & Editing. Christian Tetzlaff: Conceptualization; Funding acquisition; Investigation; Methodology; Project administration; Resources; Supervision; Writing – Original Draft; Writing – Review & Editing.

FUNDING INFORMATION

Christian Tetzlaff, Horizon 2020 Framework Programme (http://dx.doi.org/10.13039/
100010661), Award ID: 732266. Christian Tetzlaff, Deutsche Forschungsgemeinschaft (http://
dx.doi.org/10.13039/501100001659), Award ID: CRC 1286 (project C1).

REFERENCES

Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: Taming the beast. Nature Neuroscience, 3(11s), 1178–1183. https://doi.org/10.1038/81453

Albus, J. S. (1971). A theory of cerebellar function. Mathematical Biosciences, 10(1), 25–61. https://doi.org/10.1016/0025-5564(71)90051-4

Babadi, B., & Sompolinsky, H. (2014). Sparseness and expansion in sensory representations. Neuron, 83(5), 1213–1226. https://doi.org/10.1016/j.neuron.2014.07.035

Balling, A., Technau, G. M., & Heisenberg, M. (1987). Are the structural changes in adult Drosophila mushroom bodies memory traces? Studies on biochemical learning mutants.
Journal of Neurogenetics, 4(1), 65–73. https://doi.org/10.3109/01677068709102334

Benda, J., & Herz, A. V. M. (2003). A universal model for spike-frequency adaptation. Neural Computation, 15(11), 2523–2564. https://doi.org/10.1162/089976603322385063

Bi, G.-q., & Poo, M.-m. (1998). Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18(24), 10464–10472. https://doi.org/10.1523/JNEUROSCI.18-24-10464.1998

Bliss, T. V. P., & Lømo, T. (1973). Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. Journal of Physiology, 232(2), 331–356. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1350458/

Brecht, M., & Sakmann, B. (2002). Dynamic representation of whisker deflection by synaptic potentials in spiny stellate and pyramidal cells in the barrels and septa of layer 4 rat somatosensory cortex. Journal of Physiology, 543(1), 49–70. https://doi.org/10.1113/jphysiol.2002.018465

Chacron, M. J., Longtin, A., & Maler, L. (2011). Efficient computation via sparse coding in electrosensory neural networks. Current Opinion in Neurobiology, 21(5), 752–760. https://doi.org/10.1016/j.conb.2011.05.016

Dan, Y., Atick, J. J., & Reid, R. C. (1996). Efficient coding of natural scenes in the lateral geniculate nucleus: Experimental test of a computational theory. Journal of Neuroscience, 16(10), 3351–3362. https://doi.org/10.1523/JNEUROSCI.16-10-03351.1996

Daw, N. W. (2003). Critical periods in the visual system. In B. Hopkins & S. P. Johnson (Eds.), Neurobiology of infant vision. Westport, CT: Praeger.

Daw, N. W., Fox, K., Sato, H., & Czepita, D. (1992). Critical period for monocular deprivation in the cat visual cortex. Journal of Neurophysiology, 67, 197–202. https://doi.org/10.1152/jn.1992.67.1.197

Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.

Desai, N. S., Cudmore, R. H., Nelson, S. B., & Turrigiano, G. G. (2002). Critical periods for experience-dependent synaptic scaling in visual cortex. Nature Neuroscience, 5(8), 783–789. https://doi.org/10.1038/nn878

Desai, N. S., Rutherford, L. C., & Turrigiano, G. G. (1999). Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience, 2(6), 515–520. https://doi.org/10.1038/9165

DeWeese, M. R., & Zador, A. M. (2003). Binary coding in auditory cortex. Advances in Neural Information Processing Systems, 117–124.

Eichler, K., Li, F., Litwin-Kumar, A., Park, Y., Andrade, I., Schneider-Mizell, C. M., . . . Cardona, A. (2017). The complete connectome of a learning and memory centre in an insect brain. Nature, 548, 175–182. https://doi.org/10.1038/nature23455

Faghihi, F., Kolodziejski, C., Fiala, A., Wörgötter, F., & Tetzlaff, C. (2013). An information theoretic model of information processing in the Drosophila olfactory system: The role of inhibitory neurons for system efficiency. Frontiers in Computational Neuroscience, 7, 183.

Franks, K. M., & Isaacson, J. S. (2006). Strong single-fiber sensory inputs to olfactory cortex: Implications for olfactory coding. Neuron, 49(3), 357–363. https://doi.org/10.1016/j.neuron.2005.12.026

Gerstner, W., & Kistler, W. M. (2002). Mathematical formulations of Hebbian learning. Biological Cybernetics, 87, 404–415.

Hartmann, C., Lazar, A., Nessler, B., & Triesch, J. (2015). Where's the noise? Key features of spontaneous activity and neural variability arise through learning in a deterministic network. PLoS Computational Biology, 11, e1004640. https://doi.org/10.1371/journal.pcbi.1004640

Hebb, D. O. (1949). The organization of behavior. New York, NY: Wiley & Sons. Retrieved from http://s-f-walker.org.uk/pubsebooks/pdfs/The_Organization_of_Behavior-Donald_O._Hebb.pdf

Hengen, K. B., Lambo, M. E., Van Hooser, S. D., Katz, D. B., & Turrigiano, G. G. (2013). Firing rate homeostasis in visual cortex of freely behaving rodents. Neuron, 80, 335–342.

Hensch, T. K. (2004). Critical period regulation. Annual Review of Neuroscience, 27, 549–579. https://doi.org/10.1146/annurev.neuro.27.070203.144327

Hooks, B. M., & Chen, C. (2007). Critical periods in the visual system: Changing views for a model of experience-dependent plasticity. Neuron, 56, 312–326. https://doi.org/10.1016/j.neuron.2007.10.003

Jefferis, G. S. X. E., Potter, C. J., Chan, A. M., Marin, E. C., Rohlfing, T., Maurer, C. R., & Luo, L. (2007). Comprehensive maps of Drosophila higher olfactory centers: Spatially segregated fruit and pheromone representation. Cell, 128(6), 1187–1203. https://doi.org/10.1016/j.cell.2007.01.040

Keck, T., Keller, G. B., Jacobsen, R. I., Eysel, U. T., Bonhoeffer, T., & Hübener, M. (2013). Synaptic scaling and homeostatic plasticity in the mouse visual cortex in vivo. Neuron, 80, 327–334.

Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: A self-organizing recurrent neural network. Frontiers in Computational Neuroscience, 3, 23.

LeMasson, G., Marder, E., & Abbott, L. F. (1993). Activity-dependent regulation of conductances in model neurons. Science, 259(5103), 1915–1917. https://doi.org/10.1126/science.8456317

Markram, H., Lübke, J., Frotscher, M., & Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297), 213–215. https://doi.org/10.1126/science.275.5297.213

Marr, D. (1969). A theory of cerebellar cortex. Journal of Physiology, 202(2), 437–470. https://doi.org/10.1113/jphysiol.1969.sp008820

Martin, S. J., Grimwood, P. D., & Morris, R. G. M. (2000). Synaptic plasticity and memory: An evaluation of the hypothesis. Annual Review of Neuroscience, 23(1), 649–711. https://doi.org/10.1146/annurev.neuro.23.1.649

Miller, K. D. (1996). Synaptic economics: Competition and cooperation in synaptic plasticity. Neuron, 17, 371–374.

Miller, K. D., & MacKay, D. J. C. (1994). The role of constraints in Hebbian learning. Neural Computation, 6(1), 100–126. https://doi.org/10.1162/neco.1994.6.1.100

Miner, D., & Triesch, J. (2016). Plasticity-driven self-organization under topological constraints accounts for nonrandom features
of cortical synaptic wiring. Biología Computacional PLoS, 12(2),
e1004759.

Mombaerts, P., Wang, F., Dulac, C., Chao, S. K., Nemes, A., Mendelsohn, M., . . . Axel, R. (1996). Visualizing an olfactory sensory map. Cell, 87(4), 675–686. https://doi.org/10.1016/S0092-8674(00)81387-2

Monk, T., Savin, C., & Lücke, J. (2016). Neurons equipped with intrinsic plasticity learn stimulus intensity statistics. In Advances in Neural Information Processing Systems (pp. 4278–4286).

Monk, T., Savin, C., & Lücke, J. (2018). Optimal neural inference of stimulus intensities. Scientific Reports, 8, 10038. https://doi.org/10.1038/s41598-018-28184-5

Olshausen, B. A. (2003). Principles of image representation in visual cortex. The Visual Neurosciences, 2, 1603–1615.

Perez-Orive, J., Mazor, O., Turner, G. C., Cassenaer, S., Wilson, R. I., & Laurent, G. (2002). Oscillations and sparsening of odor representations in the mushroom body. Science, 297(5580), 359–365. https://doi.org/10.1126/science.1070502

Poo, C., & Isaacson, J. S. (2009). Odor representations in olfactory cortex: “Sparse” coding, global inhibition, and oscillations. Neuron, 62(6), 850–861. https://doi.org/10.1016/j.neuron.2009.05.022

Savin, C., Joshi, P., & Triesch, J. (2010). Independent component analysis in spiking neurons. PLoS Computational Biology, 6(4), e1000757.

Stemmler, M., & Koch, C. (1999). How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience, 2(6), 521–527. https://doi.org/10.1038/9173

Stettler, D. D., & Axel, R. (2009). Representations of odor in the piriform cortex. Neuron, 63(6), 854–864. https://doi.org/10.1016/j.neuron.2009.09.005

Tetzlaff, C., Kolodziejski, C., Markelic, I., & Wörgötter, F. (2012). Time scales of memory, learning, and plasticity. Biological Cybernetics, 106(11), 715–726.

Tetzlaff, C., Kolodziejski, C., Timme, M., & Wörgötter, F. (2011). Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Frontiers in Computational Neuroscience, 5. https://doi.org/10.3389/fncom.2011.00047

Triesch, J. (2007). Synergies between intrinsic and synaptic plasticity mechanisms. Neural Computation, 19, 885–909.

Triesch, J., Vo, A. D., & Hafner, A.-S. (2018). Competition for synaptic building blocks shapes synaptic plasticity. eLife, 7. https://doi.org/10.7554/eLife.37836

Tsodyks, M. V., & Feigelman, M. V. (1988). The enhanced storage capacity in neural networks with low activity level. EPL (Europhysics Letters), 6(2), 101. https://doi.org/10.1209/0295-5075/6/2/002

Turner, G. C., Bazhenov, M., & Laurent, G. (2008). Olfactory representations by Drosophila mushroom body neurons. Journal of Neurophysiology, 99(2), 734–746. https://doi.org/10.1152/jn.01283.2007

Turrigiano, G., Abbott, L., & Marder, E. (1994). Activity-dependent changes in the intrinsic properties of cultured neurons. Science, 264(5161), 974–977. https://doi.org/10.1126/science.8178157

Turrigiano, G. G. (2008). The self-tuning neuron: Synaptic scaling of excitatory synapses. Cell, 135, 422–435.

Turrigiano, G. G., Leslie, K. R., Desai, N. S., Rutherford, L. C., & Nelson, S. B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature, 391(6670), 892–896. https://doi.org/10.1038/36103

Vincis, R., Gschwend, O., Bhaukaurally, K., Beroud, J., & Carleton, A. (2012). Dense representation of natural odorants in the mouse olfactory bulb. Nature Neuroscience, 15(4), 537–539. https://doi.org/10.1038/nn.3057

Vinje, W. E., & Gallant, J. L. (2000). Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287(5456), 1273–1276. https://doi.org/10.1126/science.287.5456.1273

Yger, P., & Gilson, M. (2015). Models of metaplasticity: A review of concepts. Frontiers in Computational Neuroscience, 9, 138.

Zenke, F., Agnes, E. J., & Gerstner, W. (2015). Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications, 6, 6922.

Zenke, F., & Gerstner, W. (2017). Hebbian plasticity requires compensatory processes on multiple timescales. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 372. https://doi.org/10.1098/rstb.2016.0259

Zenke, F., Hennequin, G., & Gerstner, W. (2013). Synaptic plasticity in neural networks needs homeostasis with a fast rate detector. PLoS Computational Biology, 9(11), e1003330.

Zhang, W., & Linden, D. J. (2003). The other side of the engram: Experience-driven changes in neuronal intrinsic excitability. Nature Reviews Neuroscience, 4, 886–900.
