Article
Communicated by Dirk Ostwald
Learning in Volatile Environments
With the Bayes Factor Surprise
Vasiliki Liakoni*
vasiliki.liakoni@epfl.ch
Alireza Modirshanechi*
alireza.modirshanechi@epfl.ch
Wulfram Gerstner
wulfram.gerstner@epfl.ch
Johanni Brea
johanni.brea@epfl.ch
École Polytechnique Fédérale de Lausanne, School of Computer and Communication
Sciences and School of Life Sciences, 1015 Lausanne, Switzerland
Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.
*V.L. and A.M. made equal contributions to this article and are co-corresponding
authors.
Neural Computation 33, 269–340 (2021)
https://doi.org/10.1162/neco_a_01352
© 2021 Massachusetts Institute of Technology
1 Introduction
Animals, humans, and similarly reinforcement learning agents may safely
assume that the world is stochastic and stationary during some intervals of
time interrupted by change points. The position of leaves on a tree, a stock
market index, or the time it takes to travel from A to B in a crowded city
is often well captured by stationary stochastic processes for extended peri-
ods of time. Then sudden changes may happen, such that the distribution
of leaf positions becomes different due to a storm, the stock market index
is affected by the enforcement of a new law, or a blocked road causes addi-
tional traffic jams. The violation of an agent’s expectation caused by such
sudden changes is perceived by the agent as surprise, which can be seen as
a measure of how much the agent’s current belief differs from reality.
Surprise, with its physiological manifestations in pupil dilation (Nassar et al., 2012; Preuschoff, 't Hart, & Einhauser, 2011) and EEG signals (Mars et al., 2008; Modirshanechi, Kiani, & Aghajan, 2019; Ostwald et al., 2012), is believed to modulate learning, potentially through the release of specific neurotransmitters (Gerstner, Lehmann, Liakoni, Corneil, & Brea, 2018; Yu & Dayan, 2005) so as to allow animals and humans to adapt quickly to sudden changes. The quick adaptation to novel situations has been demonstrated in a variety of learning experiments (Behrens, Woolrich, Walton, & Rushworth, 2007; Glaze, Kable, & Gold, 2015; Heilbron & Meyniel, 2019; Nassar et al., 2012; Nassar, Wilson, Heasly, & Gold, 2010; Yu & Dayan, 2005). The bulk of computational work on surprise-based learning can be separated into two groups. Studies in the field of computational neuroscience have focused on biological plausibility with little emphasis on the accuracy of learning (Behrens et al., 2007; Bogacz, 2017; Faraji, Preuschoff, & Gerstner, 2018; Friston, 2010; Friston, FitzGerald, Rigoli, Schwartenbeck, & Pezzulo, 2017; Nassar et al., 2012; Nassar, Wilson, Heasly, & Gold, 2010; Ryali, Reddy, & Yu, 2018; Schwartenbeck, FitzGerald, Dolan, & Friston, 2013; Yu & Dayan, 2005), whereas exact and approximate Bayesian online methods (Adams & MacKay, 2007; Fearnhead & Liu, 2007) for change point detection and parameter estimation have been developed without any focus on biological plausibility (Aminikhanghahi & Cook, 2017; Cummings, Krehbiel, Mei, Tuo, & Zhang, 2018; Lin, Sharpnack, Rinaldo, & Tibshirani, 2017; Masegosa et al., 2017; Wilson, Nassar, & Gold, 2010).
In this work, we take a top-down approach to surprise-based learning. We start with a generative model of change points similar to the one that has been the starting point of multiple experiments (Behrens et al., 2007; Findling, Chopin, & Koechlin, 2019; Glaze et al., 2015; Heilbron & Meyniel, 2019; Nassar et al., 2012, 2010; Yu & Dayan, 2005). We demonstrate that Bayesian inference on such a generative model can be interpreted as modulation of learning by surprise; we show that this modulation leads to a natural definition of surprise that is different from but closely related to the Shannon Surprise (Shannon, 1948). Moreover, we derive three novel approximate
online algorithms with update rules that inherit the surprise-modulated
adaptation rate of exact Bayesian inference. The overall goal of this article
is to give a Bayesian interpretation for surprise-based learning in the brain
and to find approximate methods that are computationally efficient and bi-
ologically plausible while maintaining a high level of learning accuracy. As
a by-product, our approach provides theoretical insights on commonalities
and differences among existing surprise-based and approximate Bayesian
methods. Importantly, our approach makes specific experimental predictions.
In section 2, we introduce the generative model and we present our surprise-based interpretation of Bayesian inference and our three approximate algorithms. Next, we use simulations to compare our algorithms with existing ones on two different tasks inspired by and closely related to real experiments (Behrens et al., 2007; Mars et al., 2008; Nassar et al., 2012, 2010; Ostwald et al., 2012). At the end of section 2, we formalize two experimentally testable predictions of our theory and illustrate them with simulations. A brief review of related studies as well as a few directions for further work are supplied in section 3.
2 Results
In order to study learning in an environment that exhibits occasional and abrupt changes, we consider a hierarchical generative model (see Figure 1A) in discrete time, similar to existing model environments (Behrens et al., 2007; Nassar et al., 2012, 2010; Yu & Dayan, 2005). At each time point t, the observation Yt = y comes from a distribution with the time-invariant likelihood PY(y|θ) parameterized by Θt = θ, where both y and θ can be multidimensional. In general, we indicate random variables by capital letters and values by lowercase letters. Whenever there is no risk of ambiguity, we drop the explicit notation of random variables to simplify notation. Abrupt changes of the environment correspond to sudden changes of the parameter θt. At every time t, there is a change probability pc ∈ (0, 1) for the parameter θt to be drawn from its prior distribution π(0) independent of its previous value, and a probability 1 − pc to stay the same as θt−1. A change at time t is specified by the event Ct = 1; otherwise, Ct = 0. Therefore, the generative model can be formally defined, for any T ≥ 1, as a joint probability distribution over Θ1:T ≡ (Θ1, . . . , ΘT), C1:T, and Y1:T as

P(c1:T, θ1:T, y1:T) = P(c1) P(θ1) P(y1 | θ1) ∏_{t=2}^{T} P(ct) P(θt | ct, θt−1) P(yt | θt),    (2.1)

where P(θ1) = π(0)(θ1), P(c1) = δ(c1 − 1), and

P(ct) = Bernoulli(ct; pc),    (2.2)
Figure 1: Nonstationary environment. (A) Generative model. At each time point t there is a probability pc ∈ (0, 1) for a change in the environment. When there is a change in the environment, that is, Ct = 1, the parameter Θt = θ is drawn from its prior distribution π(0), independent of its previous value. Otherwise the value of Θt retains its value from the previous time step t − 1. Given a parameter value Θt = θ, the observation Yt = y is drawn from a probability distribution PY(y|θ). We indicate random variables by capital letters, and values by lowercase letters. (B) Example of a nonstationary environment. Your friend meets you every day at the coffee shop (blue dot) starting after work from her office (orange dot) and crossing a river. The time of arrival of your friend is the observed variable Yt, which due to the traffic or your friend's workload may exhibit some variability but has a stable expectation (θ). If, however, a new bridge is opened (Ct = 1 where t is the moment of change), your friend no longer needs to take a detour. There is, then, a sudden change in her observed daily arrival times.
P(θt | ct, θt−1) = { δ(θt − θt−1)  if ct = 0,
                     π(0)(θt)      if ct = 1,    (2.3)

P(yt | θt) = PY(yt | θt).    (2.4)
P stands for either probability density function (for the continuous variables) or probability mass function (for the discrete variables), and δ is the Dirac or Kronecker delta distribution, respectively.
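For concreteness, the following minimal sketch samples c1:T, θ1:T, y1:T from this generative model. The gaussian likelihood PY(y|θ) = N(y; θ, σ²) with standard normal prior π(0) = N(0, 1) matches the gaussian task of section 2.3; the function name and layout are our own choices, and the model itself admits any likelihood and prior:

```python
import numpy as np

def simulate_environment(T, p_c, sigma, seed=0):
    """Sample c_{1:T}, theta_{1:T}, y_{1:T} from the hierarchical generative
    model of equations 2.1 to 2.4 (gaussian likelihood, standard normal prior)."""
    rng = np.random.default_rng(seed)
    c = np.zeros(T, dtype=int)
    theta = np.zeros(T)
    y = np.zeros(T)
    for t in range(T):
        c[t] = 1 if t == 0 else int(rng.random() < p_c)  # P(c_1) = delta(c_1 - 1)
        if c[t] == 1:
            theta[t] = rng.normal(0.0, 1.0)   # parameter redrawn from prior pi^(0)
        else:
            theta[t] = theta[t - 1]           # parameter persists between changes
        y[t] = rng.normal(theta[t], sigma)    # observation from P_Y(y | theta)
    return c, theta, y

c, theta, y = simulate_environment(T=1000, p_c=0.01, sigma=0.5)
```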
Given a sequence of observations y1:t, the agent's belief π(t)(θ) about the parameter Θt at time t is defined as the posterior probability distribution P(Θt = θ | y1:t). In the online learning setting studied here, the agent's goal is to update the belief π(t)(θ) to the new belief π(t+1)(θ), or an approximation thereof, upon observing yt+1.
A simplified real-world example of such an environment is illustrated in Figure 1B. Imagine that every day, a friend meets you at the coffee shop, starting after work from her office (see Figure 1B, left). To do so, she needs to cross a river via a bridge. The time of arrival of your friend (yt) exhibits some variability due to various sources of stochasticity (e.g., traffic and your friend's daily workload), but it has a stable average over time (θt). However, if a new bridge is opened, your friend arrives earlier, since she no longer has to take a detour (see Figure 1B, right). The moment of opening the new bridge is indicated by ct+1 = 1 in our framework and the sudden change in the average arrival time of your friend by a sudden change from θt to θt+1. Even without any explicit discussion with your friend about this situation and only by observing her actual arrival time, you can notice the abrupt change and hence adapt your schedule to the new situation.
2.1 Online Bayesian Inference Modulated by Surprise. According to the definition of the hierarchical generative model (see Figure 1A and equations 2.1 to 2.4), the value yt+1 of the observation at time t + 1 depends only on the parameters θt+1, and is (given θt+1) independent of earlier observations and earlier parameter values. We exploit this Markovian property and update, using Bayes' rule, the belief π(t)(θ) ≡ P(Θt = θ | y1:t) at time t to the new belief at time t + 1:

π(t+1)(θ) = PY(yt+1 | θ) P(Θt+1 = θ | y1:t) / P(yt+1 | y1:t).    (2.5)
So far, equation 2.5 remains rather abstract. The aim of this section is to rewrite it in the form of a surprise-modulated recursive update. The first term in the numerator of equation 2.5 is the likelihood of the current observation given the parameter Θt+1 = θ, and the second term is the agent's estimated probability distribution of Θt+1 before observing yt+1. Because there is always the possibility of an abrupt change, the second term is not the agent's previous belief π(t) but P(Θt+1 = θ | y1:t) = (1 − pc) π(t)(θ) + pc π(0)(θ). As a result, it is possible to find a recursive formula for updating the belief. For the derivation of this recursive rule, we define the following terms.
Definition 1. The probability or density (for discrete and continuous variables respectively) of observing y with a belief π(t′) is denoted as

P(y; π(t′)) = ∫ PY(y | θ) π(t′)(θ) dθ.    (2.6)

Note that if π is the exact Bayesian belief defined as above in equation 2.5, then P(y; π(t′)) = P(Yt′+1 = y | y1:t′, ct′+1 = 0). In section 2.2 we will also use P(y; π̂(t′)) for an arbitrary π̂(t′). Two particularly interesting cases of equation 2.6 are P(yt+1; π(t)), the probability of a new observation yt+1 with the current belief π(t), and P(yt+1; π(0)), the probability of a new observation yt+1 with the prior belief π(0).

Definition 2. The Bayes Factor Surprise S_BF of the observation yt+1 is defined as the ratio of the probability of observing yt+1 given ct+1 = 1 (i.e., when there is a change), to the probability of observing yt+1 given ct+1 = 0 (i.e., when there is no change), that is,

S_BF(yt+1; π(t)) = P(yt+1; π(0)) / P(yt+1; π(t)).    (2.7)
This definition of surprise measures how much more probable the current observation is under the naive prior π(0) relative to the current belief π(t) (see section 3 for further interpretation). This probability ratio is the Bayes factor (Efron & Hastie, 2016; Kass & Raftery, 1995) that tests the prior belief π(0) against the current belief π(t). We emphasize that our definition of surprise is not arbitrary, but essential in order to write the exact inference in equation 2.5 on the generative model in the compact recursive form indicated in the proposition that follows. Moreover, as we show later, this term can be identified in multiple learning algorithms (among them, Nassar et al., 2010, 2012), but it has never been interpreted as a surprise measure. In the following sections we establish the generality of this computational mechanism and identify it as a common feature of many learning algorithms.
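The Bayes Factor Surprise is directly computable whenever the marginal probabilities P(y; π) of equation 2.6 have closed form. A minimal sketch for a gaussian belief, gaussian prior, and gaussian likelihood with known variance σ² (our own illustration; names and parameterization are not taken from the article's algorithms):

```python
import numpy as np

def gaussian_marginal(y, mu, var, sigma2):
    """P(y; pi) for a gaussian belief pi = N(mu, var) and likelihood
    P_Y(y|theta) = N(y; theta, sigma2): a gaussian predictive distribution."""
    v = var + sigma2
    return np.exp(-0.5 * (y - mu) ** 2 / v) / np.sqrt(2 * np.pi * v)

def bayes_factor_surprise(y, mu_t, var_t, mu_0, var_0, sigma2):
    """S_BF(y; pi^(t)) = P(y; pi^(0)) / P(y; pi^(t))  (equation 2.7)."""
    return (gaussian_marginal(y, mu_0, var_0, sigma2) /
            gaussian_marginal(y, mu_t, var_t, sigma2))

# An observation that is unlikely under a confident current belief but
# plausible under the prior yields a large surprise:
print(bayes_factor_surprise(y=0.1, mu_t=2.0, var_t=0.01,
                            mu_0=0.0, var_0=1.0, sigma2=0.25))
```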
Definition 3. Under the assumption of no change ct+1 = 0, and using the most recent belief π(t) as prior, the exact Bayesian update for π(t+1) is denoted as

π(t+1)_B(θ) = PY(yt+1 | θ) π(t)(θ) / P(yt+1; π(t)).    (2.8)
π(t+1)_B(θ) describes the incorporation of the new information into the current belief via Bayesian updating.
Definition 4. The surprise-modulated adaptation rate is a function γ: R+ × R+ → [0, 1] specified as

γ(S, m) = mS / (1 + mS),    (2.9)

where S ≥ 0 is a surprise value and m ≥ 0 is a parameter controlling the effect of surprise on learning.
Using the above definitions and equation 2.5, we have for the generative
model of Figure 1A and equations 2.1 to 2.4 the following proposition:
Proposition. Exact Bayesian inference on the generative model is equivalent to
the recursive update rule,
π(t+1)(θ) = [1 − γ(S_BF(t+1), pc/(1 − pc))] π(t+1)_B(θ) + γ(S_BF(t+1), pc/(1 − pc)) P(θ | yt+1),    (2.10)

where S_BF(t+1) = S_BF(yt+1; π(t)) is the Bayes Factor Surprise, and

P(θ | yt+1) = PY(yt+1 | θ) π(0)(θ) / P(yt+1; π(0))    (2.11)

is the posterior if we take yt+1 as the only observation.
The proposition indicates that the exact Bayesian inference on the generative model discussed above (see Figure 1) leads to an explicit trade-off between (1) integrating a new observation ynew (corresponding to yt+1) and the old belief πold (corresponding to π(t)) into a distribution πintegration (corresponding to π(t+1)_B) and (2) forgetting the past observations, so as to restart with the belief πreset (corresponding to P(θ | yt+1)), which relies only on the new observation and the prior π(0):

πnew(θ) = (1 − γ) πintegration(θ | ynew, πold) + γ πreset(θ | ynew, π(0)).    (2.12)

This trade-off is governed by a surprise-modulated adaptation rate γ(S, m) ∈ [0, 1], where S = S_BF ≥ 0 (corresponding to the Bayes Factor Surprise) can be interpreted as the surprise of the most recent observation, and m = pc/(1 − pc) ≥ 0 is a parameter controlling the effect of surprise on learning. Because the parameter of modulation m is equal to pc/(1 − pc), for a fixed value of surprise S, the adaptation rate γ is an increasing function of pc. Therefore, in more volatile environments, the same value of surprise S leads to a higher
adaptation rate than in a less volatile environment; in the case of pc → 1,
any surprise value leads to full forgetting, γ = 1.
As a conclusion, our first main result is that a split as in equation 2.12
with a weighting factor (“adaptation rate” γ ) as in equation 2.9 is exact and
always possible for the class of environments defined by our hierarchical
generative model. This surprise modulation gives rise to specific testable
experimental predictions discussed later.
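To make the trade-off concrete, the following sketch performs a single step of equations 2.9 to 2.12 for a gaussian prior, a gaussian current belief, and a gaussian likelihood. In this conjugate case, both πintegration and πreset are gaussians, and the exact new belief is their γ-weighted mixture (representing the result as a list of weighted components is our illustration, not the article's algorithm):

```python
import numpy as np

def one_step_exact_update(y, mu_t, var_t, mu_0, var_0, sigma2, p_c):
    """One step of equation 2.12: mixture of the integrated belief pi_B
    (weight 1 - gamma) and the reset belief P(theta|y) (weight gamma)."""
    def predictive(mu, var):            # P(y; pi): gaussian with inflated variance
        v = var + sigma2
        return np.exp(-0.5 * (y - mu) ** 2 / v) / np.sqrt(2 * np.pi * v)
    s_bf = predictive(mu_0, var_0) / predictive(mu_t, var_t)   # equation 2.7
    m = p_c / (1.0 - p_c)
    gamma = m * s_bf / (1.0 + m * s_bf)                        # equation 2.9
    def posterior(mu, var):             # conjugate gaussian update with y
        var_post = 1.0 / (1.0 / var + 1.0 / sigma2)
        return var_post * (mu / var + y / sigma2), var_post
    return [(1.0 - gamma, *posterior(mu_t, var_t)),            # pi_integration
            (gamma, *posterior(mu_0, var_0))]                  # pi_reset

components = one_step_exact_update(y=1.5, mu_t=0.0, var_t=0.1,
                                   mu_0=0.0, var_0=1.0, sigma2=0.25, p_c=0.01)
```

Note that the exact posterior accumulates one new mixture component per time step; this is the linear memory growth that motivates the approximations introduced next.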
2.2 Approximate Algorithms Modulated by Surprise. Despite the simplicity of the recursive formula in equation 2.10, the updated belief π(t+1) is generally not in the same family of distributions as the previous belief π(t); for example, the result of averaging two normal distributions is not a normal distribution. Hence it is in general impossible to find a simple and exact update rule for, say, some sufficient statistic. As a result, the memory demands for π(t+1) scale linearly in time, and updating π(t+1) using π(t) needs O(t) operations. In the following sections, we investigate three approximations, algorithms 1 to 3, that have simple update rules and finite memory demands, so that the updated belief remains tractable over a long sequence of observations.
As our second main result, we show that all our three approximate algorithms inherit the surprise-modulated adaptation rate from the exact Bayesian approach, equations 2.9 and 2.12. The first algorithm adapts an earlier algorithm of surprise minimization learning (SMiLe; Faraji et al., 2018) to variational learning. We refer to our novel algorithm as Variational SMiLe and abbreviate it as VarSMiLe (see algorithm 1). The second algorithm is based on message passing (Adams & MacKay, 2007) restricted to a finite number of messages N. We refer to this algorithm as MPN (see algorithm 2). The third algorithm uses the ideas of particle filtering (Gordon, Salmond, & Smith, 1993) for an efficient approximation for our hierarchical generative model. We refer to our approximate algorithm as Particle Filtering with N particles and abbreviate it by pfN (see algorithm 3). All algorithms are computationally efficient, have finite memory demands, and are biologically plausible; Particle Filtering has possible neuronal implementations (Huang & Rao, 2014; Kutschireiter, Surace, Sprekeler, & Pfister, 2017; Legenstein & Maass, 2014; Shi & Griffiths, 2009), MPN can be seen as a greedy version of pfN without sampling, and Variational SMiLe may be implemented by neo-Hebbian (Gerstner et al., 2018; Lisman, Grace, & Duzel, 2011) update rules. Simulation results show that the performance of the three approximate algorithms is comparable to and more robust across environments than other state-of-the-art approximations.
2.2.1 Variational SMiLe Rule: Algorithm 1. A simple heuristic approxi-
mation to keep the updated belief in the same family as the previous be-
liefs consists in applying the weighted averaging of the exact Bayesian up-
date rule (see equation 2.10) to the logarithm of the beliefs rather than the
beliefs themselves:

log π̂(t+1)(θ) = (1 − γt+1) log π̂(t+1)_B(θ) + γt+1 log P(θ | yt+1) + const.,    (2.13)
where γt+1 = γ(S_BF(yt+1; π̂(t)), m) is given by equation 2.9 with a free parameter m > 0, which can be tuned to each environment. By doing so, we still have the explicit trade-off between two terms as in equation 2.10, but in the logarithms; yet an advantageous consequence of averaging over logarithms is that if the likelihood function PY is in the exponential family and the initial belief π(0) is its conjugate prior, then π̂(t+1) and π(0) are members of the same family. In this particular case, we arrive at a simple update rule for the parameters of π̂(t+1) (see algorithm 1 for pseudocode and section 4 for details). As it is common in variational approaches (Beal, 2003), the price of this simplicity is that except for the trivial cases of pc = 0 and pc = 1, there is no evidence other than simulations that the update rule of equation 2.13 will end up at an approximate belief close to the exact Bayesian belief.
One way to interpret the update rule of equation 2.13 is to rewrite it as the solution of a constraint optimization problem. The new belief π̂(t+1) is a variational approximation of the Bayesian update π̂(t+1)_B (see section 4),

π̂(t+1)(θ) = arg min_q DKL[q(θ) || π̂(t+1)_B(θ)],    (2.14)
with a family of functions q(θ) constrained by the Kullback-Leibler divergence,

DKL[q(θ) || P(θ | yt+1)] ≤ Bt+1,    (2.15)

where the bound Bt+1 ∈ [0, DKL[π̂(t+1)_B(θ) || P(θ | yt+1)]] is a decreasing function of the Bayes Factor Surprise S_BF(yt+1; π̂(t)) (see section 4 for proof), and P(θ | yt+1) is given by equation 2.11.
Because of the similarity of the constraint optimization problem in equations 2.14 and 2.15 to the Surprise Minimization Learning rule (SMiLe) (Faraji et al., 2018), we call this algorithm the Variational Surprise Minimization Learning rule, or for short, the Variational SMiLe rule. The differences between SMiLe and variational SMiLe are discussed in section 4.
Our variational method, and particularly its surprise-modulated adapta-
tion rate, is complementary to earlier studies (Masegosa et al., 2017; Özkan,
Šmídl, Saha, Lundquist, & Gustafsson, 2013) in machine learning that as-
sumed different generative models and used additional assumptions and
different approaches for deriving the learning rule.
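For the gaussian task, where the likelihood is in the exponential family and the prior is conjugate, the log-space average of equation 2.13 becomes a linear combination of natural parameters. A minimal sketch under that assumption (our own parameterization; see algorithm 1 in the article for the general exponential-family pseudocode):

```python
import numpy as np

def varsmile_step(y, eta, lam, eta_0, lam_0, sigma2, m):
    """One VarSMiLe update (equation 2.13) for a gaussian belief stored in
    natural parameters: lam = 1/var_belief, eta = mu_belief/var_belief."""
    def predictive(e, l):                     # P(y; pi): gaussian predictive
        mu, v = e / l, 1.0 / l + sigma2
        return np.exp(-0.5 * (y - mu) ** 2 / v) / np.sqrt(2 * np.pi * v)
    s_bf = predictive(eta_0, lam_0) / predictive(eta, lam)   # equation 2.7
    gamma = m * s_bf / (1.0 + m * s_bf)                      # equation 2.9
    eta_B, lam_B = eta + y / sigma2, lam + 1.0 / sigma2      # integration
    eta_r, lam_r = eta_0 + y / sigma2, lam_0 + 1.0 / sigma2  # reset
    # Weighted average in log space = weighted average of natural parameters:
    return ((1 - gamma) * eta_B + gamma * eta_r,
            (1 - gamma) * lam_B + gamma * lam_r)

eta, lam = 0.0, 1.0                        # start from the prior N(0, 1)
for y in [0.1, -0.2, 3.0, 3.1]:            # an apparent change at the third sample
    eta, lam = varsmile_step(y, eta, lam, eta_0=0.0, lam_0=1.0, sigma2=0.25, m=0.01)
print(eta / lam, 1.0 / lam)                # mean and variance of the updated belief
```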
2.2.2 Message-Passing N: Algorithm 2. For a hierarchical generative
model similar to ours, a message-passing algorithm has been used to per-
form exact Bayesian inference (Adams & MacKay, 2007), where the algo-
rithm’s memory demands and computational complexity scale linearly in
time t. In this section, we first explain the idea of the message-passing algorithm of Adams and MacKay (2007) and its relation to our proposition. We then present our approximate version of this algorithm, which has a constant (in time) computational complexity and memory demands.
The history of change points up to time t is a binary sequence, for example, c1:t = {1, 0, 0, 1, 0, 1, 1}, where the value 1 indicates a change in the corresponding time step. Following the idea of Adams and MacKay (2007), we define the random variable Rt = min{n ∈ N : Ct−n+1 = 1} in order to describe the time since the last change point, which takes values between 1 and t. We can write the exact Bayesian expression for π(t)(θ) by marginalizing P(Θt = θ, rt | y1:t) over the t possible values of rt in the following way:

π(t)(θ) = Σ_{k=0}^{t−1} P(Rt = t − k | y1:t) P(Θt = θ | Rt = t − k, y1:t).    (2.16)
For consistency with algorithm 3 (i.e., Particle Filtering), we call each term in the sum of equation 2.16 a "particle" and denote as π(t)_k(θ) = P(Θt = θ | Rt = t − k, y1:t) the belief of the particle corresponding to Rt = t − k, and w(k)_t = P(Rt = t − k | y1:t) its corresponding weight at time t:

π(t)(θ) = Σ_{k=0}^{t−1} w(k)_t π(t)_k(θ).    (2.17)
For each particle, the term π(t)_k(θ) is simple to compute, because when rt is known, inference depends only on the observations after the last change point. Therefore, the goal of online inference is to find an update rule for the evolution of the weights w(k)_t over time.
We can apply the exact update rule of our proposition (see equation 2.10) to the belief expressed in the form of equation 2.17. Upon each observation of a new sample yt+1, a new particle is generated and added to the set of particles, corresponding to P(θ | yt+1) (that is, πreset), modeling the possibility of a change point occurring at t + 1. According to the proposition, the weight of the new particle (k = t) is equal to

w(t)_{t+1} = γt+1,    (2.18)

where γt+1 = γ(S_BF(yt+1; π(t)), pc/(1 − pc)) (cf. equation 2.9). The other t particles coming from π(t) correspond to π(t+1)_B (that is, πintegration) in the proposition. The update rule (see section 4 for derivation) for the weights of these particles (0 ≤ k ≤ t − 1) is

w(k)_{t+1} = (1 − γt+1) w(k)_{B,t+1} = (1 − γt+1) [P(yt+1; π(t)_k) / P(yt+1; π(t))] w(k)_t.    (2.19)
So far, we used the idea of Adams and MacKay (2007) to write the belief as in equation 2.17 and used our proposition to arrive at the surprise-modulated update rules in equations 2.18 and 2.19.
The computational complexity and memory requirements of the complete message-passing algorithm increase linearly with time t. To deal with this issue and to have constant computation and memory demands over time, we implemented a message-passing algorithm of the form of equations 2.17 to 2.19, but with a fixed number N of particles, chosen as those with the highest weights w(k)_t. Therefore, our second algorithm adds a new approximation step to the full message-passing algorithm of Adams and MacKay (2007): Whenever t > N, after adding the new particle with the weight as in equation 2.18 and updating the previous weights as in equation 2.19, we discard the particle with the smallest weight (i.e., set its weight equal to 0) and renormalize the weights. By doing so, we always keep the number of particles with nonzero weights equal to N. Note that for t ≤ N, our algorithm is exact and identical to the message-passing algorithm of Adams and MacKay (2007). We call our modification of the message-passing algorithm of Adams and MacKay (2007) "Message Passing N (MPN)."
To deal with the computational complexity and memory requirements,
one may alternatively keep only the particles with weights greater than
a cut-off threshold (Adams & MacKay, 2007). However, such a constant
cut-off leads to a varying number (smaller or equal to t) of particles in time. Our approximation MPN can therefore be seen as a variation of the thresholding algorithm in Adams and MacKay (2007) with a fixed number of particles N, and hence a variable cut-off threshold. The work of Fearnhead and Liu (2007) follows the same principle as Adams and MacKay (2007), but employs stratified resampling to eliminate particles with negligible weights in order to reduce the total number of particles. Their resampling algorithm involves solving a complicated nonlinear equation at each time step, which makes it unsuitable for a biological implementation. Furthermore, we experienced that in some cases, the small errors introduced in the resampling step of the algorithm of Fearnhead and Liu (2007) accumulated and led to worse performance than our MPN algorithm, which simply keeps the N particles with the highest weight at each time step.

For the case where the likelihood function PY(y|θ) is in the exponential family and π(0) is its conjugate prior, the resulting algorithm of MPN has a simple update rule for the belief parameters (see algorithm 2 and section 4 for details). For comparison, we also implemented in our simulations the full message-passing algorithm of Adams and MacKay (2007) with an almost zero cut-off (machine precision), which we consider as our benchmark "Exact Bayes," as well as the stratified optimal resampling algorithm of Fearnhead and Liu, which we call "SORN."
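A minimal sketch of one MPN step for the gaussian conjugate case may look as follows. It assumes the particle buffer is already full (for t ≤ N the algorithm is exact and nothing is discarded), and the data layout is our own choice:

```python
import numpy as np

def mpn_step(y, mus, vars_, ws, mu_0, var_0, sigma2, p_c, N):
    """One MPN update: add a particle with weight gamma (equation 2.18),
    reweight the old ones (equation 2.19), keep the N heaviest particles."""
    def predictive(mu, var):                   # P(y; pi_k): gaussian predictive
        v = var + sigma2
        return np.exp(-0.5 * (y - mu) ** 2 / v) / np.sqrt(2 * np.pi * v)
    p_k = predictive(mus, vars_)               # per-particle marginal likelihood
    p_tot = np.sum(ws * p_k)                   # P(y; pi^(t))
    s_bf = predictive(mu_0, var_0) / p_tot     # Bayes Factor Surprise, eq. 2.7
    m = p_c / (1.0 - p_c)
    gamma = m * s_bf / (1.0 + m * s_bf)        # adaptation rate, eq. 2.9
    # The appended prior component turns into the new particle's reset
    # posterior after the conjugate update below; old particles integrate y.
    mus = np.append(mus, mu_0)
    vars_ = np.append(vars_, var_0)
    post_vars = 1.0 / (1.0 / vars_ + 1.0 / sigma2)
    post_mus = post_vars * (mus / vars_ + y / sigma2)
    ws = np.append((1.0 - gamma) * ws * p_k / p_tot, gamma)
    keep = np.argsort(ws)[-N:]                 # discard the smallest weight
    return post_mus[keep], post_vars[keep], ws[keep] / np.sum(ws[keep])
```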
2.2.3 Particle Filtering: Algorithm 3. Equation 2.17 demonstrates that the exact Bayesian belief π(t) can be expressed as the marginalization of P(Θt = θ, rt | y1:t) over the time since the last change point rt. Equivalently, one can compute the exact Bayesian belief as the marginalization of P(Θt = θ, c1:t | y1:t) over the history of change points c1:t:

π(t)(θ) = Σ_{c1:t} P(c1:t | y1:t) P(Θt = θ | c1:t, y1:t) = E_{P(C1:t | y1:t)}[P(Θt = θ | C1:t, y1:t)].    (2.20)

The idea of our third algorithm is to approximate this expectation by particle filtering, that is, sequential Monte Carlo sampling (Doucet, Godsill, & Andrieu, 2000; Gordon et al., 1993) from P(C1:t | y1:t). We then approximate π(t) by

π̂(t)(θ) = Σ_{i=1}^{N} w(i)_t π̂(t)_i(θ) = Σ_{i=1}^{N} w(i)_t P(Θt = θ | c(i)_{1:t}, y1:t),    (2.21)

where {c(i)_{1:t}}_{i=1}^{N} is a set of N realizations (or samples) of c1:t (i.e., N particles) drawn from a proposal distribution Q(c1:t | y1:t), {w(i)_t}_{i=1}^{N} are their
corresponding weights at time t, and π̂(t)_i(θ) = P(Θt = θ | c(i)_{1:t}, y1:t) is the approximate belief corresponding to particle i.
Upon observing yt+1, the update procedure for the approximate belief π̂(t+1) of equation 2.21 includes two steps: (1) updating the weights and (2) sampling the new hidden state ct+1 for each particle. The two steps are coupled together through the choice of the proposal distribution Q, for which we choose the optimal proposal function (Doucet et al., 2000; see section 4). Therefore, given this choice of proposal function, we show (see
section 4) that the first step amounts to

w(i)_{t+1} = (1 − γt+1) w(i)_{B,t+1} + γt+1 w(i)_t,
w(i)_{B,t+1} = [P(yt+1; π̂(t)_i) / P(yt+1; π̂(t))] w(i)_t,    (2.22)

where γt+1 = γ(S_BF(yt+1; π̂(t)), m) with m = pc/(1 − pc) (cf. equation 2.9), and {w(i)_{B,t+1}}_{i=1}^{N} are the weights corresponding to the Bayesian update π̂(t+1)_B of equation 2.8 (see section 4). In the second step, we update each particle's history of change points by going from the sequence {c(i)_{1:t}}_{i=1}^{N} to {c(i)_{1:t+1}}_{i=1}^{N}, for which we always keep the old sequence up to time t, and for each particle i, we add a new element c(i)_{t+1} ∈ {0, 1} representing no change c(i)_{1:t+1} = [c(i)_{1:t}, 0] or change c(i)_{1:t+1} = [c(i)_{1:t}, 1]. Note, however, that it is not needed to keep the whole sequences c(i)_{1:t+1} in memory, but instead one can use c(i)_{t+1} to update π̂(t)_i to π̂(t+1)_i. We sample the new element c(i)_{t+1} from the optimal proposal distribution Q(c(i)_{t+1} | c(i)_{1:t}, y1:t+1) (Doucet et al., 2000), which is given by (see section 4)

Q(c(i)_{t+1} = 1 | c(i)_{1:t}, y1:t+1) = γ(S_BF(yt+1; π̂(t)_i), pc/(1 − pc)).    (2.23)
Interestingly, the above formulas entail the same surprise modulation and the same trade-off as proposed by the proposition's equation 2.10. For the weight update, there is a trade-off between an exact Bayesian update and keeping the value of the previous time step, controlled by an adaptation rate modulated exactly in the same way as in equation 2.10. Note that in contrast to equation 2.10, the trade-off for the particles' weights is not between forgetting and integrating, but between maintaining the previous knowledge and integrating. However, the change probability (see equation 2.23) for sampling is equal to the adaptation rate and is an increasing function of surprise. Hence, although the weights are updated less for surprising events, a higher surprise causes a higher probability for change, indicated by c(i)_{t+1} = 1, which implies forgetting, because for a particle i with c(i)_{t+1} = 1, the associated belief π̂(t+1)_i = P(Θt+1 = θ | c(i)_{1:t}, c(i)_{t+1} = 1, y1:t+1) is equal to P(Θt+1 = θ | c(i)_{t+1} = 1, yt+1) = P(θ | yt+1) (see Figure 1A), which is equivalent to a reset of the belief as in equation 2.12. In other words, whereas in MPN and the exact Bayesian inference in the proposition's equation 2.10, the trade-off between integration and reset is accomplished by adding at each time step a new particle with weight γt+1, in Particle Filtering, it is accomplished via sampling. As a conclusion, the above formulas are essentially the same as the update rules of MPN (see equations 2.19 and
2.18) and have the same spirit as the recursive update of the proposition's equation 2.10.

Equations 2.21 and 2.23 can be applied to the case where the likelihood function PY(y|θ) is in the exponential family and π(0) is its conjugate prior. The resulting algorithm (algorithm 3) has a particularly simple update rule for the belief parameters (see section 4 for details).
The theory of particle filter methods is well established (Doucet et al., 2000; Gordon et al., 1993; Särkkä, 2013). Particle filters in simpler (Brown & Steyvers, 2009) or more complex (Findling et al., 2019) forms have also been employed to explain human behavior. Here we derived a simple particle filter for the general case of generative models of equations 2.2 to 2.4. Our main contribution is to show that the use of the optimal proposal distribution in this particle filter leads to a surprise-based update scheme.
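A minimal sketch of one pfN step for the same gaussian conjugate case, combining the weight update of equation 2.22 with the sampling step of equation 2.23 (again an illustration under our own parameterization, not the article's reference implementation):

```python
import numpy as np

def pf_step(y, mus, vars_, ws, mu_0, var_0, sigma2, p_c, rng):
    """One Particle Filtering update (equations 2.22 and 2.23)."""
    def predictive(mu, var):
        v = var + sigma2
        return np.exp(-0.5 * (y - mu) ** 2 / v) / np.sqrt(2 * np.pi * v)
    m = p_c / (1.0 - p_c)
    p_i = predictive(mus, vars_)                  # P(y; pi_i) per particle
    p_tot = np.sum(ws * p_i)                      # P(y; pi^(t))
    s = m * predictive(mu_0, var_0) / p_tot
    gamma = s / (1.0 + s)                         # gamma(S_BF, m), eq. 2.9
    # Weight update, eq. 2.22: Bayesian reweighting vs. keeping the old weight.
    ws = (1.0 - gamma) * ws * p_i / p_tot + gamma * ws
    ws = ws / np.sum(ws)
    # Sampling step, eq. 2.23: per-particle surprise sets the change probability.
    s_i = m * predictive(mu_0, var_0) / p_i
    change = rng.random(len(ws)) < s_i / (1.0 + s_i)
    # Changed particles reset to the prior; then all particles integrate y.
    mus = np.where(change, mu_0, mus)
    vars_ = np.where(change, var_0, vars_)
    post_vars = 1.0 / (1.0 / vars_ + 1.0 / sigma2)
    post_mus = post_vars * (mus / vars_ + y / sigma2)
    return post_mus, post_vars, ws

rng = np.random.default_rng(0)
mus, vars_, ws = pf_step(1.5, np.zeros(20), np.ones(20),
                         np.ones(20) / 20, 0.0, 1.0, 0.25, 0.01, rng)
```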
2.2.4 Surprise-Modulation as a Framework for Other Algorithms. Other existing algorithms (Adams & MacKay, 2007; Faraji et al., 2018; Fearnhead & Liu, 2007; Nassar et al., 2012, 2010) can also be formulated in the surprise-modulation framework of equations 2.9 and 2.12 (see section 4). Moreover, in order to allow for a transparent discussion and for fair comparisons in simulations, we extended the algorithms of Nassar et al. (2012, 2010) to a more general setting. Here we give a brief summary of the algorithms we considered. A detailed analysis is provided in section 4.5.

The algorithms of Nassar et al. (2012, 2010) were originally designed for a gaussian estimation task (see section 2.3 for details of the task) with a broad uniform prior. We extended them to the more general case of gaussian tasks with gaussian priors, and we call our extended versions Nas10∗ and Nas12∗ for Nassar et al. (2010) and Nassar et al. (2012), respectively (for a performance comparison between our extended algorithms and their original versions, see Figures 12 and 13). Both algorithms have the same surprise-modulation as in our proposition (see equation 2.10). There are multiple interpretations of the approaches of Nas10∗ and Nas12∗ and links to other algorithms. One such link we identify is in relation to Particle Filtering with a single particle (pf1). More specifically, one can show that pf1 behaves in expectation similar to Nas10∗ and Nas12∗ (see section 4 and appendix).
2.3 Simulations. With the goal of gaining a better understanding of different approximate algorithms, we evaluated the departure of their performance from the exact Bayesian algorithm in terms of mean squared error (MSE) of estimating Θt (see section 4), on two tasks inspired by and closely related to real experiments (Behrens et al., 2007; Maheu, Dehaene, & Meyniel, 2019; Mars et al., 2008; Nassar et al., 2012, 2010; Ostwald et al., 2012): a gaussian and a categorical estimation task.
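As a sketch of the evaluation used below, MSE[Θ̂t | Rt = n] can be estimated from simulation logs by averaging the squared error over all time steps that are n steps after a change point (this helper is our illustration, not the article's evaluation code):

```python
import numpy as np

def mse_since_change(theta_hat, theta, c, n_max=50):
    """Estimate MSE[Theta_hat_t | R_t = n] for n = 1..n_max from one run;
    c and theta come from the environment, theta_hat from the learning agent."""
    r = np.zeros(len(c), dtype=int)           # time since the last change point
    for t in range(len(c)):
        r[t] = 1 if c[t] == 1 else r[t - 1] + 1
    sq_err = (np.asarray(theta_hat) - np.asarray(theta)) ** 2
    return np.array([sq_err[r == n].mean() for n in range(1, n_max + 1)])
```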
We compared our three novel algorithms VarSMiLe, Particle Filtering (pfN, where N is the number of particles), and Message Passing with a finite number of particles N (MPN) to the online exact Bayesian Message Passing algorithm (Adams & MacKay, 2007) (Exact Bayes), which yields the optimal solution with Θ̂t = Θ̂t^Opt. In addition, we included in the comparison the stratified optimal resampling algorithm (Fearnhead & Liu, 2007) (SORN, where N is the number of particles), our variant of Nassar et al.
(2010) (Nas10∗) and Nassar et al. (2012) (Nas12∗), the Surprise-Minimization Learning algorithm of Faraji et al. (2018) (SMiLe), as well as a simple Leaky Integrator (Leaky; see section 4).

The algorithms Exact Bayes and SORN come from the field of change point detection, and whereas the former has high memory demands, the latter has the same memory demands as our algorithms pfN and MPN. The algorithms Nas10∗, Nas12∗, and SMiLe, on the other hand, come from the human learning literature and are more biologically oriented.
2.3.1 Gaussian Estimation Task. The task is a generalized version of the experiment of Nassar et al. (2012, 2010). The goal of the agent is to estimate the mean θt = μt of observed samples, which are drawn from a gaussian distribution with known variance σ²: yt+1 ~ N(μt+1, σ²). The mean μt+1 is itself drawn from a gaussian distribution μt+1 ~ N(0, 1) whenever the environment changes. In other words, the task is a special case of the generative model of equations 2.2 to 2.4, with π(0)(μt) = N(μt; 0, 1) and PY(yt|μt) = N(yt; μt, σ²). An example of the task is shown in Figure 2A.

We simulated the task for all combinations of σ ∈ {0.1, 0.5, 1, 2, 5} and pc ∈ {0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. For each combination of σ and pc, we first tuned the free parameter of each algorithm—m for SMiLe and Variational SMiLe, the leak parameter for the Leaky Integrator, and the pc of Nas10∗ and Nas12∗—by minimizing the MSE on three random initializations of the task. For the Particle Filter (pfN), the Exact Bayes, the MPN, and the SORN, we empirically checked that the true pc of the environment was indeed the value that gave the best performance, and we used this value for the simulations. We evaluated the performance of the algorithms on 10 different random task instances of 10⁵ steps each for pc ∈ {0.1, 0.05, 0.01, 0.005} and 10⁶ steps each for pc ∈ {0.001, 0.0001} (in order to sample more change points). Note that the parameter σ is not estimated, and its actual value is used by all algorithms except the Leaky Integrator.
In Figure 2B we show the MSE[Θ̂t | Rt = n] in estimating the parameter after n steps since the last change point, for each algorithm, computed over multiple changes, for two exemplar task settings. The Particle Filter with 20 particles (pf20), the VarSMiLe, and the Nas12∗ have an overall performance very close to that of the Exact Bayes algorithm (MSE[Θ̂t^Opt | Rt = n]), with much lower memory requirements. VarSMiLe sometimes slightly outperforms the other two early after an environmental change (see Figure 2B, right), but shows slightly higher error values at later phases. The MPN algorithm is the closest one to the optimal solution (i.e., Exact Bayes) for low σ (see Figure 2B, left), but its performance is much worse for the case of high σ and low pc (see Figure 2B, right). For SORN we observe a counterintuitive behavior in the regime of low σ: the inclusion of more particles leads to worse performance (see Figure 2B, left). At higher σ levels, the performance of SOR20 is close to optimal and better than the MP20 in later time steps.
Figure 2: Gaussian estimation task: Transient performance after changes. (A) At each time step an observation (depicted as black dot) is drawn from a gaussian distribution ∼ exp(−(yt − μt)²/2σ²) with changing mean μt (marked in blue) and known variance σ² (lower left panels). At every change of the environment (marked with red lines), a new mean μt is drawn from a standard gaussian distribution ∼ exp(−μt²/2). In this example, σ = 1 and pc = 0.01. (B) Mean squared error for the estimation of μt at each time step n after an environmental change, that is, the average of MSE[Θ̂t | Rt = n] over time; σ = 0.1, pc = 0.1 (left panel) and σ = 5, pc = 0.01 (right panel). The shaded area corresponds to the standard error of the mean. Abbreviations: pfN: Particle Filtering with N particles; MPN: Message Passing with N particles; VarSMiLe: Variational SMiLe; SORN: Stratified Optimal Resampling with N particles (Fearnhead & Liu, 2007); SMiLe: Faraji et al. (2018); Nas10∗, Nas12∗: Variants of Nassar et al. (2010) and Nassar et al. (2012), respectively; Leaky: Leaky Integrator; Exact Bayes: Adams and MacKay (2007).
This may be due to the fact that the MPN discards particles in a deterministic and greedy way (i.e., the one with the lowest weight), whereas for the SORN, there is a component of randomness in the process of particle elimination, which may be important for environments with higher stochasticity.
For the Leaky Integrator, we observe a trade-off between good performance in the transient phase and the stationary phase; a fixed leak value cannot fulfill both requirements. The SMiLe rule, by construction, never narrows its belief π̂(θ) below some minimal value, which allows it to have a low error immediately after a change but leads later to high errors. Its performance deteriorates for higher σ (see Figure 2B, right). The Nas10∗
Figure 3: Gaussian estimation task: Steady-state performance. (A) Mean squared error of the Exact Bayes algorithm (i.e., optimal solution) for each combination of σ and pc averaged over time. (B–F) Difference between the mean squared error of each algorithm and the optimal solution (of panel A), that is, the average of ΔMSE[Θ̂t] over time. The color bar of panel A applies to these panels as well. Note that the black color for the MP20 indicates negative values, which are due to the finite sample size for the estimation of MSE. Abbreviations: See the Figure 2 caption.
performs well for low but not for higher values of σ. Despite the fact that a Particle Filter with one particle (pf1) is in expectation similar to Nas10∗ and Nas12∗ (see section 4), it performs worse than these two algorithms on trial-by-trial measures. Still, it performs better than the MP1 and identically to the SOR1.
In Figure 3A, we have plotted the average of MSE[Θ̂t^Opt] of the Exact Bayes algorithm over the whole simulation time for each of the considered σ and pc levels. The difference between the other algorithms and this benchmark is called ΔMSE[Θ̂t] (see section 4) and is plotted in Figures 3C to 3F. All algorithms except for the SOR20 have lower average error
values for low σ and low pc than high σ and high pc. The Particle Filter pf20 and the Message Passing MP20 have the smallest difference from the optimal solution. The average error of MP20 is higher than that of pf20 for high σ and low pc, whereas pf20 is more robust across levels of environmental parameters. The worst-case performance for pf20 is ΔMSE[Θ̂t] = 0.033 for σ = 5 and pc = 0.0001, and for SOR20, it is ΔMSE[Θ̂t] = 0.061 for σ = 0.1 and pc = 0.1. The difference between these two worst-case scenarios is significant (p-value = 2.79 × 10⁻⁶, two-sample t-test, 10 random seeds for each algorithm). Next in performance are the algorithms Nas12∗ and VarSMiLe. VarSMiLe exhibits its largest deviation from the optimal solution for high σ and low pc, but is still more resilient compared to the MPN algorithms for this type of environment. Among the algorithms with only one unit of memory demands—pf1, MP1, SOR1, VarSMiLe, SMiLe, Leaky, Nas10∗, and Nas12∗—the winners are VarSMiLe and Nas12∗. The SOR20 has low error overall but unexpectedly high error for environmental settings that are presumably more relevant for biological agents (intervals of low stochasticity marked by abrupt changes). The simple Leaky Integrator performs well at low σ and pc but deviates more from the optimal solution as these parameters increase (see Figure 3F). The SMiLe rule performs best at lower σ, that is, more deterministic environments.
A summary graph, where we collect the ΔMSE[Θ̂t] across all levels of σ and pc, is shown in Figure 4. We can see that pf20, Nas12∗, and VarSMiLe give the lowest worst case (lowest maximum value) ΔMSE[Θ̂t] and are statistically better than the other eight algorithms (the error bars indicate the standard error of the mean across the ten random task instances).
2.3.2 Categorical Estimation Task. The task is inspired by the experiments of Behrens et al. (2007), Maheu et al. (2019), Mars et al. (2008), and Ostwald et al. (2012). The goal of the agent is to estimate the occurrence probability of five possible states. Each observation yt+1 ∈ {1, . . . , 5} is drawn from a categorical distribution with parameters θt+1 = pt+1, that is, yt+1 ~ Cat(yt+1; pt+1). When there is a change Ct+1 = 1 in the environment, the parameters pt+1 are drawn from a Dirichlet distribution Dir(s · 1), where s ∈ (0, ∞) is the stochasticity parameter. In relation to the generative model of equations 2.2 to 2.4, we thus have π(0)(pt) = Dir(pt; s · 1) and PY(yt|pt) = Cat(yt; pt). An illustration of this task is depicted in Figure 5A.
We considered the combinations of stochasticity levels s ∈ {0.01, 0.1, 0.14, 0.25, 1, 2, 5} and change probability levels pc ∈ {0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. The algorithms of Nassar et al. (2012, 2010) were specifically developed for a gaussian estimation task and cannot be applied here. All other algorithms were first optimized for each combination of environmental parameters before an experiment starts, and then evaluated on 10 different random task instances, for 10⁵ steps each for pc ∈ {0.1, 0.05, 0.01, 0.005} and 10⁶ steps each for pc ∈ {0.001, 0.0001}. The parameter s is not estimated, and its actual value is used by all algorithms except the Leaky Integrator.
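Since the Dirichlet distribution is the conjugate prior of the categorical likelihood, the exponential-family updates carry over directly; as an example, a minimal sketch of the VarSMiLe step for this task, mirroring the gaussian version above (again our own illustration):

```python
import numpy as np

def varsmile_categorical_step(y, alpha, alpha_0, m):
    """One VarSMiLe update for a Dirichlet belief Dir(alpha) over the
    categorical parameters; y is a 0-indexed category, alpha_0 = s * ones(K)."""
    def predictive(a):                       # P(y; pi) under a Dirichlet belief
        return a[y] / np.sum(a)
    s_bf = predictive(alpha_0) / predictive(alpha)      # equation 2.7
    gamma = m * s_bf / (1.0 + m * s_bf)                 # equation 2.9
    e_y = np.zeros_like(alpha)
    e_y[y] = 1.0
    # Log-space average of the integrated and reset Dirichlet posteriors:
    return (1 - gamma) * (alpha + e_y) + gamma * (alpha_0 + e_y)

alpha_0 = 0.14 * np.ones(5)                 # prior Dir(s * 1) with s = 0.14
alpha = alpha_0.copy()
for y in [0, 0, 0, 3, 3]:                   # categories are 0-indexed here
    alpha = varsmile_categorical_step(y, alpha, alpha_0, m=0.01)
print(alpha / alpha.sum())                  # current estimate of p_t
```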
Figure 4: Gaussian estimation task: Steady-state performance summary. Difference between the mean squared error of each algorithm and the optimal solution (Exact Bayes), that is, the average of ΔMSE[Θ̂t] over time, for all combinations of σ and pc together. For each algorithm we plot the 30 values (5 σ times 6 pc values) of Figure 3 with respect to randomly jittered values in the x-axis. The color coding is the same as in Figure 2. The error bars mark the standard error of the mean across 10 random task instances. The difference between the worst case of SOR20 and pf20 is significant (p-value = 2.79 × 10⁻⁶, two-sample t-test, 10 random seeds for each algorithm). Abbreviations: See the Figure 2 caption.
The Particle Filter pf20, the MP20, and the SOR20 have a performance closest to that of Exact Bayes, the optimal solution (see Figure 5B). VarSMiLe is the next in the ranking, with a behavior after a change similar to the gaussian task. pf20 performs better for s > 2, and MP20 performs better for s ≤ 2 (see Figure 6). For this task, the biologically less plausible SOR20 is the winner in performance, and it behaves most consistently across environmental parameters. Its worst-case performance is ΔMSE[Θ̂t] = 8.16 × 10⁻⁵ for s = 2 and pc = 0.01, and the worst-case performance for pf20 is ΔMSE[Θ̂t] = 0.0048 for s = 0.25 and pc = 0.005 (p-value = 1.148 × 10⁻¹², two-sample t-test, 10 random seeds for each algorithm). For all the other algorithms except MP20, the highest deviations from the optimal solution are observed for medium stochasticity levels (see Figures 6B to 6F). When the
Figure 5: Categorical estimation task: Transient performance after changes. (A) At each time step, the agent sees one out of five possible categories (black dots) drawn from a categorical distribution with parameters pt. Occasional abrupt changes happen with probability pc and are marked with red lines. After each change, a new pt vector is drawn from a Dirichlet distribution with stochasticity parameter s. In this example, s = 1 and pc = 0.01. (B) Mean squared error for the estimation of pt at each time step n after an environmental change, that is, the average of MSE[Θ̂t | Rt = n] over time; s = 0.14, pc = 0.01 (left panel) and s = 5, pc = 0.005 (right panel). The shaded area corresponds to the standard error of the mean. Abbreviations: pfN: Particle Filtering with N particles; MPN: Message Passing with N particles; VarSMiLe: Variational SMiLe; SORN: Stratified Optimal Resampling with N particles (Fearnhead & Liu, 2007); SMiLe: Faraji et al. (2018); Leaky: Leaky Integrator; Exact Bayes: Adams and MacKay (2007).
environment is nearly deterministic (e.g., s = 0.001 so that the parameter vectors pt have almost all mass concentrated in one component), or highly stochastic (e.g., s > 1 so that nearly uniform categorical distributions are likely to be sampled), these algorithms achieve higher performance, although the Particle Filter is the algorithm that is most resilient to extreme choices of the stochasticity parameter s. For VarSMiLe in particular, the lowest mean error is achieved for high s and high pc or low s and low pc.

A summary graph, with ΔMSE[Θ̂t] across all levels of s and pc, is in Figure 7. The algorithms with the lowest "worst case" are SOR20 and pf20. The top four algorithms SOR20, pf20, MP20, and VarSMiLe are significantly better than the others (the error bars indicate the standard error of the mean
Figure 6: Categorical estimation task: Steady-state performance. (A) Mean squared error of the Exact Bayes algorithm (i.e., optimal solution) for each combination of environmental parameters s and pc averaged over time. (B–F) Difference between the mean squared error of each algorithm and the optimal solution (of panel A), that is, the average of ΔMSE[Θ̂t] over time. The color bar of panel A applies to these panels as well. Abbreviations: See the Figure 5 caption.
across the 10 random task instances), whereas MP1 and SMiLe have the
largest error with a maximum at 0.53.
2.3.3 Summary of Simulation Results. In summary, our simulation results of the two tasks collectively suggest that our Particle Filtering (pfN) and Message Passing (MPN) algorithms achieve a high level of performance, very close to the one of biologically less plausible algorithms with higher (Exact Bayes) and same (SORN) memory demands. Moreover, their behavior is more consistent across tasks. Finally, among the algorithms with memory demands of one unit, VarSMiLe performs best.
Figure 7: Categorical estimation task: Steady-state performance summary. Difference between the mean squared error of each algorithm and the optimal solution (Exact Bayes), that is, the average of ΔMSE[Θ̂t] over time, for all combinations of s and pc together. For each algorithm, we plot the 42 values (7 s times 6 pc values) of Figure 6 with respect to randomly jittered values in the x-axis. The color coding is the same as in Figure 5. The error bars mark the standard error of the mean across 10 random task instances. The difference between the worst case of SOR20 and pf20 is significant (p-value = 1.148 × 10⁻¹², two-sample t-test, 10 random seeds for each algorithm). Abbreviations: See the Figure 5 caption. Note that MP1 and SMiLe are out of bound with a maximum at 0.53.
2.3.4 Robustness against Suboptimal Parameter Choice. In all algorithms we considered, the environment's hyperparameters are assumed to be known. We can distinguish between two types of hyperparameters in our generative model: the parameters of the likelihood function (e.g., σ in the gaussian task), and the pc and the parameters of the conjugate prior (e.g., s in the categorical task). Hyperparameters of the first type can be added to the parameter vector θ and be inferred with the same algorithm. However, learning the second type of hyperparameters is not straightforward. By assuming that these hyperparameters are learned more slowly than θ, one can fine-tune them after each n (e.g., 10) change points, while change points can be detected by looking at the particles for the Particle Filter and at the peaks of surprise values for VarSMiLe. Other approaches to hyperparameter estimation can be found in Doucet and Tadić (2003), George and Doss (2017), Liu and West (2001), and Wilson et al. (2010).
When the hyperparameters are fixed, a mismatch between the assumed values and the true values is a possible source of errors. In this section, we investigate the robustness of the algorithms to a mismatch between the assumed and the actual probability of change points. To do so, we first tuned each algorithm's parameter for an environment with a change probability pc and then tested the algorithms in environments with different change probabilities while keeping the parameter fixed. For each new environment with a different change probability, we calculated the difference between the MSE of these fixed parameters and the optimal MSE, that is, the resulting MSE for the case that the Exact Bayes' parameter is tuned for the actual pc.
, 个人电脑] − MSE[ ˆ(西德:3)Opt
更确切地说, if we denote as MSE[ ˆ(西德:3)t; p(西德:5)
, 个人电脑] the MSE of an al-
gorithm with parameters tuned for an environment with p(西德:5)
C, applied
in an environment with pc, we calculated the mean regret, 定义为
均方误差[ ˆ(西德:3)t; p(西德:5)
, 个人电脑], 随着时间的推移; note that the second term is
equal to MSE[ ˆ(西德:3)t; 个人电脑, 个人电脑] when the algorithm Exact Bayes is used for esti-
运动. The lower the values and the flatter the curve of the mean regret,
the better the performance and the robustness of the algorithm in the face of
lacking knowledge of the environment. The slope of the curve indicates the
degree of deviations of the performance as we move away from the opti-
mally tuned parameter. We ran three random (and same for all algorithms)
tasks initializations for each pc level.
C
C
t
C
c levels. For σ = 0.1 和 p(西德:5)
图中 8 we plot the mean regret for each algorithm for the gaussian
task for four pairs of s and p(西德:5)
= 0.04 (见图
8A), the Exact Bayes and the MP20 show the highest robustness (smallest
后悔) and are closely followed by the pf20, VarSMiLe, and Nas12∗ (笔记
the regret’s small range of values). The lower the actual pc, 越高
后悔, but the changes are still very small. The curves for the SMiLe and the
Leaky Integrator are also relatively flat, but the mean regret is much higher.
The SOR20 is the least robust algorithm.
Similar observations can be made for σ = 0.1 和 p(西德:5)
C
= 0.004 (见图-
ure 8B). 在这种情况下, the performance of all algorithms deteriorates strongly
when the actual pc is higher than the assumed one.
然而, for σ = 5 (see Figures 8C and 8D), the ranking of algorithms
变化. The SOR20 is very robust for this level of stochasticity. The pf20
and MP20 perform similarly for pc = 0.04, but for lower p(西德:5)
C, the pf20 is
more robust and the MP20 exhibits high fluctuations in its performance.
The Nas12∗ is quite robust at this σ level. Overall for Exact Bayes, SOR20,
∗
, a mismatch of the assumed pc from the actual
pf20, VarSMiLe, and Nas12
one does not deteriorate the performance dramatically for σ = 5, p(西德:5)
= 0.004
C
(see Figure 8D). The SMiLe and the Leaky Integrator outperform the other
algorithms for higher p(西德:5)
C (see Figure 8C). A potential reason is that
c if pc < p(cid:5)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
294
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Figure 8: Robustness to mismatch between actual and assumed probability of
changes for the gaussian estimation task. The mean regret is the mean squared
error obtained with assumed change probability p(cid:5)
c minus the mean squared
error obtained with the optimal parameter choice of Exact Bayes for the given
; p(cid:5)
actual pc, that is, the average of the quantity MSE[ ˆ(cid:3)
, pc]
c
over time. A red triangle marks the p(cid:5)
c value each algorithm was tuned for. We
plot the mean regret for the following parameter combinations: A, σ = 0.1 and
p(cid:5)
= 0.004; C, σ = 5 and p(cid:5)
=
c
c
0.004. Abbreviations: See the Figure 2 caption.
= 0.04; B, σ = 0.1 and p(cid:5)
c
= 0.04; D, σ = 5 and p(cid:5)
c
, pc] − MSE[ ˆ(cid:3)Opt
t
t
the optimal behavior for the Leaky Integrator (according to the tuned pa-
rameters) is to constantly integrate new observations into its belief (i.e., to
act like a Perfect Integrator) regardless of the p(cid:5)
c level. This feature makes it
blind to the pc and therefore very robust against the lack of knowledge of it
(see Figure 8C).
In summary, most of the time, the mean regret for Exact Bayes and MP20
is less than the mean regret for pf20 and VarSMiLe. However, the variability
Learning in Volatile Environments With the Bayes Factor Surprise
295
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Figure 9: Robustness to mismatch between actual and assumed probability of
changes for the categorical estimation task. The mean regret is the mean squared
error obtained with assumed change probability p(cid:5)
c minus the mean squared
error obtained with the optimal parameter choice of Exact Bayes for the given
; p(cid:5)
actual pc, that is, the average of the quantity MSE[ ˆ(cid:3)
, pc]
c
over time. A red triangle marks the p(cid:5)
c value each algorithm was tuned for. We
plot the mean regret for the following parameter combinations: A, s = 0.14 and
p(cid:5)
= 0.004; C, s = 5 and p(cid:5)
=
c
c
0.004. Abbreviations: See the Figure 5 caption.
= 0.04; B, s = 0.14 and p(cid:5)
c
= 0.04; D, s = 5 and p(cid:5)
c
, pc] − MSE[ ˆ(cid:3)Opt
t
t
in the mean regret for pf20 and VarSMiLe is smaller, and their curves are
flatter across pc levels, which makes their performance more predictable.
The results for the categorical estimation task are similar to those of the
gaussian task, with the difference that the SOR20 is very robust for this case
(see Figure 9).
2.4 Experimental Prediction. It has been experimentally shown that
indicators statistically
some important behavioral and physiological
296
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
correlate with a measure of surprise or a prediction error. Examples of such
indicators are the pupil diameter (Joshi & Gold, 2020; Nassar et al., 2012;
Preuschoff et al., 2011), the amplitude of the P300, N400, and MMN compo-
nents of EEG (Kopp & Lange, 2013; Lieder, Daunizeau, Garrido, Friston, &
Stephan, 2013; Mars et al., 2008; Meyniel, Maheu, & Dehaene, 2016; Modir-
shanechi et al., 2019; Musiolek, Blankenburg, Ostwald, & Rabovsky, 2019;
Ostwald et al., 2012), the amplitude of MEG in specific time windows (Ma-
heu et al., 2019), BOLD responses in fMRI (Konovalov & Krajbich, 2018;
Loued-Khenissi, Pfeuffer, Einhäuser, & Preuschoff, 2020), and reaction time
(Huettel, Mack, & McCarthy, 2002; Meyniel et al., 2016). The surprise mea-
sure is usually the negative log probability of the observation, known as
Shannon Surprise (Shannon, 1948), and denoted here as SSh. However, as
we show in this section, as long as there is an uninformative prior over
observations, Shannon Surprise SSh is just an invertible function of our
modulated adaptation rate γ and hence an invertible function of the Bayes
Factor Surprise SBF. Thus, based on the results of previous works (Meyniel
et al., 2016; Modirshanechi et al., 2019; Nassar et al., 2012, 2010; Ostwald
et al., 2012), which always used uninformative priors, one cannot deter-
mine whether the physiological and behavioral indicators we have noted
correlate with SSh or SBF.
In this section, we first investigate the theoretical differences between
the Bayes Factor Surprise SBF and Shannon Surprise SSh. Then, based on
their observed differences, we formulate two experimentally testable pre-
dictions, with a detailed experimental protocol. Our predictions make it
possible to discriminate between the two measures of surprise and deter-
mine whether physiological or behavioral measurements are signatures of
SBF or of SSh.
2.4.1 Theoretical Difference between SBF and SSh. Shannon Surprise (Shan-
non, 1948) is defined as
SSh(yt+1
; π (t)) = −log
(cid:7)
(cid:8)
|y1:t )
,
P(yt+1
(2.24)
(cid:7)
; π (t)) = −log
|y1:t ), one should know the structure of
where for computing P(yt+1
the generative model. For the generative model of Figure 1A, we
find SSh(yt+1
. While the
Bayes Factor Surprise SBF depends on a ratio between the probability of the
new observation under the prior and the current beliefs, Shannon Surprise
depends on a weighted sum of these probabilities. Interestingly, it is pos-
sible to express (see section 4 for derivation) the adaptation rate γt+1 as a
function of the “difference in Shannon Surprise,”
; π (t)) + pcP(yt+1
(1 − pc)P(yt+1
(cid:8)
; π (0))
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
(cid:8)SSh(yt+1
(cid:8)
; π (t), π (0))
,
(cid:7)
= pcexp
γt+1
where (cid:8)SSh(yt+1
; π (t), π (0)) = SSh(yt+1
; π (t)) − SSh(yt+1
; π (0)),
(2.25)
Learning in Volatile Environments With the Bayes Factor Surprise
297
(cid:7)
S(t+1)
(cid:8)
, m
BF
= γ
where γt+1
depends on the Bayes Factor Surprise and the
saturation parameter m (cf. equation 2.9). Equation 2.25 shows that the
modulated adaptation rate is not just a function of Shannon Surprise upon
observing yt+1, but a function of the difference between the Shannon Surprise
of this observation under the current and the prior beliefs. Note that if the
prior is uninformative then the second term in equation 2.25 is a constant
with respect to observations, and γt+1 is an invertible function of SSh, thus
indistinguishable from SSh. We next exploit differences between SBF and SSh
to formulate our experimentally testable predictions.
2.4.2 Experimental Protocol. Consider the variant of the gaussian task of
Nassar et al. (2012, 2010), which we used in our simulations: PY (y|θ ) =
N (y; θ , σ 2) and π (0)(θ ) = N (θ ; 0, 1). Human subjects are asked to predict
the next observation yt+1 given what they have observed so far, that is, y1:t.
The experimental procedure is as follows:
1. Fix the hyperparameters σ 2 and pc.
2. At each time t, show the observation yt (produced in the aforemen-
tioned way) to the subject, and measure a physiological or behav-
ioral indicator Mt, for example, pupil diameter (Nassar et al., 2012,
2010).
3. At each time t, after observing yt, ask the subject to predict the next
observation ˆyt+1 and his or her confidence Ct about that prediction.
Note that the only difference between our task and the task of Nassar et al.
(2012, 2010) is the choice of prior for θ (i.e., gaussian instead of uniform).
The assumption is that according to the previous studies, there is a positive
correlation between Mt and a measure of surprise.
2.4.3 Prediction 1. Based on the results of Nassar et al. (2012, 2010), in
such a gaussian task, the best fit for subjects’ prediction ˆyt+1 is ˆθt, and the
confidence Ct is a monotonic function of the standard deviation of ˆθt, de-
ˆsigmat. In order to formalize our experimental prediction, we de-
noted as
fine, at time t, the prediction error as δt = yt − ˆyt and the “sign bias” as
st = sign(δt ˆyt ). The variable st is a crucial variable for our analysis. It shows
whether the prediction ˆyt is an overestimation in absolute value (st = +1)
or an underestimation in absolute value (st = −1). Figure 10A shows a
schematic for the case that both the current and prior beliefs are gaus-
sian distributions. The two observations indicated by dashed lines have the
same absolute error |δt| but differ in the sign bias s.
Given an absolute prediction value ˆy > 0, an absolute prediction error
δ > 0, a confidence value C > 0, and a sign bias s ∈ {−1, 1}, we can com-
pute the average of Mt over time for the time points with | ˆyt| ≈ ˆy, |δt| ≈ δ,
Ct ≈ C, and st = s, which we denote as ¯M1( ˆy, δ, s, C); the index 1 stands for
experimental prediction 1. The approximation notation ≈ is used for
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
298
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
数字 10: Experimental prediction 1. (A) Schematic of the task for the case
of a gaussian belief. The distributions of yt+1 under the prior belief π (0) 和
the current belief π (t) are shown by black and red curves, 分别. 二
possible observations with equal absolute prediction error δ but opposite sign
bias s are indicated by dashed lines. The two observations are equally prob-
able under π (t) but not under π (0). SBF is computed as the ratio between the
red and black dots for a given observation, whereas SSh is a function of the
weighted sum of the two. This phenomenon is the basis of our experimental
prediction. (乙) The average surprise values ¯SSh( ˆθ = 1, δ, s = ±1, σ
= 0.5) 和
¯SBF( ˆθ = 1, δ, s = ±1, σ
= 0.5) 超过 20 subjects (每个都有 500 observations) 是
shown for two different learning algorithms (Nas12∗ and pf20). The mean ¯SBF
is higher for negative sign bias (marked in blue) than for positive sign bias
(marked in orange). The opposite is observed for the mean ¯SSh. This effect in-
creases with increasing values of prediction error δ. The shaded area corre-
sponds to the standard error of the mean. The experimental task is the same
= 0.1
as the gaussian task we used in the previous section, with σ = 0.5 and pc
(参见部分 4 欲了解详情).
C
C
Learning in Volatile Environments With the Bayes Factor Surprise
299
continuous variables instead of equality due to practical limitations (IE。,
for obtaining adequate number of samples for averaging). Note that for our
theoretical proofs, we use equality, but in our simulation, we include the
practical limitations of a real experiment, 因此, use an approximation.
The formal definitions can be found in section 4. 值得一提的是
the quantity ¯M1( ˆy, δ, s, C) is model independent; its calculation does not re-
quire any assumption on the learning algorithm the subject may employ.
¯M1( ˆy, δ, s, C) reflects SSh or SBF,
Depending on whether the measurement
its relationship to the defined four variables ( ˆy, δ, s, C) is qualitatively and
quantitatively different.
In order to prove and illustrate our prediction, we consider each subject
as an agent enabled with one of the learning algorithms that we discussed.
Similar to the above, given an absolute prediction ˆθ > 0 (相应的
to the subjects’ absolute prediction ˆy), an absolute prediction error δ > 0,
a standard deviation σ
C (corresponding to the subjects’ confidence value
C), and a sign bias s ∈ {−1, 1}, we can compute the average Shannon Sur-
prise SSh(yt; ˆπ (t−1)) and the average Bayes Factor Surprise SBF(yt; ˆπ (t−1))
随着时间的推移, for the time points with | ˆθt−1
C, and st = s,
which we denote as ¯SSh( ˆθ , δ, s, σ
C) 分别. 我们可以
show theoretically (参见部分 4) and in simulations (see Figure 10B and
部分 4) that for any value of ˆθ , δ, and σ
C) >
¯SSh( ˆθ , δ, s = −1, σ
C) for the Shannon Surprise, and exactly the opposite re-
关系, ¯SBF( ˆθ , δ, s = +1, σ
C), for the Bayes Factor Sur-
prise. 而且, this effect increases with increasing δ.
C, we have ¯SSh( ˆθ, δ, s = +1, σ
C) < ¯SBF( ˆθ , δ, s = −1, σ | ≈ ˆθ , |δt| ≈ δ, ˆσt ≈ σ C) and ¯SBF( ˆθ , δ, s, σ It should be noted that such an effect is due to the essential difference of SSh and SBF in using the prior belief π (0)(θ ). Our experimental predic- tion is theoretically provable for the cases that each subject’s belief ˆπ (t) is a ∗ gaussian distribution, which is the case if they employ VarSMiLe, Nas10 , ∗ Nas12 , pf1, MP1, or Leaky Integrator as their learning rule (see section 4). For the cases that different learning rules (e.g., pf20) are used, where the posterior belief is a weighted sum of gaussians, the theoretical analysis is more complicated, but our simulations show the same results (see Figure 10B and section 4). Therefore, independent of the learning rule, we have the same experimental prediction on the manifestation of different surprise measures on physiological signals, such as pupil dilation. Our first exper- imental prediction can be summarized as the set of hypotheses shown in Table 1. 2.4.4 Prediction 2. Our second prediction follows the same experimen- tal procedure as the one for the first prediction. The main difference is that for the second prediction, we need to fit a model to the experimental data. Given one of the learning algorithms, the fitting procedure can be done by tuning the free parameters of the algorithm with the goal of minimiz- ing the mean squared error between the model’s prediction ˆθt and a sub- ject’s prediction ˆyt+1 (similar to Nassar et al., 2012, 2010) or with the goal of l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . / e d u n e c o a r t i c e - p d / l f / / / / 3 3 2 2 6 9 1 8 9 6 8 0 8 n e c o _ a _ 0 1 3 5 2 p d . / f b y g u e s t t o n 0 8 S e p e m b e r 2 0 2 3 300 V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea Table 1: Experimental Hypotheses and Predictions 1. Hypothesis Prediction The indicator reflects SBF The indicator reflects SSh The prior is not used for inference (cid:8) ¯M1( ˆθ, δ, C) = 0 (cid:8) ¯M1( ˆθ, δ, C) < 0 and (cid:8) ¯M1( ˆθ, δ, C) > 0 和
∂(西德:8) ¯M1 ( ˆθ,δ,C)
∂δ
∂(西德:8) ¯M1 ( ˆθ,δ,C)
∂δ
< 0 > 0
笔记: (西德:8) ¯M1( ˆθ, δ, C) stands for ¯M1( ˆθ, δ, s = +1, C) − ¯M1( ˆθ, δ, s = −1, C).
maximizing the likelihood of subject’s prediction ˆπ (t)( ˆyt+1). Our prediction
is independent of the learning algorithm, but in an actual experiment, 我们
recommend using model selection to find the model that fits the human
data best.
Having a fitted model, we can compute the probabilities P(yt+1
; ˆπ (t)) 和
; ˆπ (0)). For the case that these probabilities are equal, 磷(yt+1
; ˆπ (t)) =
磷(yt+1
; ˆπ (0)) = p, the Bayes Factor Surprise SBF is equal to 1, 独立的
磷(yt+1
of the value of p (参见方程 2.7). 然而, the Shannon Surprise SSh is
equal to − log p and varies with p. Figure 11A shows a schematic for the case
that both current and prior beliefs are gaussian distributions. Two cases for
; ˆπ (0)) = p, for two different p values,
; ˆπ (t)) =P(yt+1
which we have P(yt+1
are marked by black dots at the intersections of the curves.
Given a probability p > 0, we can compute the average of Mt over time
; ˆπ (0)) ≈ p, which we de-
; ˆπ (t)) ≈ p and P(yt+1
for the time points with P(yt+1
note as ¯M2(p); the index 2 stands for experimental prediction 2. Analogous
to the first prediction, the approximation notation ≈ is used due to practi-
cal limitations. 然后, if ¯M2(p) is independent of p, its behavior is consistent
with SBF, whereas if it decreases by increasing p, it can be a signature of
SSh. Our second experimental prediction can be summarized as the two hy-
potheses shown in Table 2. Note that in contrast to our first prediction, 和
the assumption that the standard deviation of the prior belief is fitted us-
ing the behavioral data, we do not consider the hypothesis that the prior is
not used for inference, because this is indistinguishable from a very large
variance of the prior belief.
In order to illustrate the possible results and the feasibility of the exper-
iment, we ran a simulation and computed ¯SBF(p) and ¯SSh(p) for the time
; ˆπ (0)) ≈ p (参见部分 4 欲了解详情).
; ˆπ (t)) ≈ p and P(yt+1
points with P(yt+1
The results of the simulation are shown in Figure 11B.
3 讨论
We have shown that performing exact Bayesian inference on a gen-
erative world model naturally leads to a definition of surprise and a
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
301
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
数字 11: Experimental prediction 2. (A) Schematic of the task for the case of
a gaussian belief. The probability distribution of observations under the prior
belief is shown by the solid black curve. Two different possible current beliefs
(determined by the letters A and B) are shown by dashed red curves. 国际米兰-
sections of the dashed red curves with the prior belief determine observations
whose SBF is same and equal to one, but their SSh is a function of their prob-
abilities under the prior belief p. (乙) The average surprise values ¯SSh(p) 和
¯SBF(p) 超过 20 subjects (每个都有 500 observations) are shown for two different
learning algorithms (Nas12∗ and pf20). The mean SBF is constant (equal to 1)
and independent of p, whereas the mean SSh is a decreasing function of p. 这
shaded area corresponds to the standard error of the mean. 实验的
task is the same as the gaussian task we used in the previous section. Obser-
vations yt are drawn from a gaussian distribution with σ = 0.5, whose mean
changes with change point probability pc
= 0.1 (参见部分 4 欲了解详情).
surprise-modulated adaptation rate. We have proposed three approximate
算法 (VarSMiLe, MPN, and pfN) for learning in nonstationary envi-
罗蒙兹, which all exhibit the surprise-modulated adaptation rate of the
302
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
桌子 2: Experimental Hypotheses and Predictions 2.
Hypothesis
The indicator reflects SBF
The indicator reflects SSh
预言
∂ ¯M2 (p)
∂ p
∂ ¯M2 (p)
∂ p
= 0
< 0
exact Bayesian approach and are biologically plausible. Empirically we ob-
served that our algorithms achieve levels of performance comparable to
approximate Bayesian methods with higher memory demands (Adams &
MacKay, 2007) and are more resilient across different environments com-
pared to methods with similar memory demands (Faraji et al., 2018; Fearn-
head & Liu, 2007; Nassar et al., 2012, 2010).
Learning in a volatile environment has been studied for a long time in
the fields of Bayesian learning, neuroscience, and signal processing. In the
following, we discuss the biological relevance of our work, and we briefly
review some of the previously developed algorithms, with particular focus
on the ones that have studied environments that can be modeled with a
generative model similar to the one in Figure 1. We then discuss further our
results and propose directions for future work on surprise-based learning.
3.1 Biological Interpretation. Humans are able to quickly adapt to
changes (Behrens et al., 2007; Nassar et al., 2012, 2010), but human behav-
ior is also often observed to be suboptimal, compared to the normative ap-
proach of exact Bayesian inference (Glaze et al., 2015; Mathys, Daunizeau,
Friston, & Stephan, 2011; Nassar et al., 2010; Prat-Carrabin, Wilson, Cohen,
& Da Silveira, 2020; Wilson, Nassar, & Gold, 2013). In general, biological
agents have limited resources and possibly inaccurate assumptions about
hyperparameters, yielding suboptimal behavior, as we also see with our
algorithms whose accuracies degrade with a suboptimal choice of hyper-
parameters. Performance also deteriorates with a decreasing number of
particles in the sampling-based algorithms, which might be another pos-
sible explanation of suboptimal human behavior. Previously, Particle Fil-
tering has been shown to explain the behavior of human subjects in chang-
ing environments: Daw and Courville (2008) use a single particle, Brown
and Steyvers (2009) use a simple heuristic form of particle filtering based
on direct simulation, Findling et al. (2019) combine Particle Filtering with a
noisy inference, and Prat-Carrabin et al. (2020) use it for a task with tempo-
ral structure.
At the level of neuronal implementation, we do not propose a specific
suggestion. However, there are several hypotheses about neural implemen-
tations of related particle filters (Huang & Rao, 2014; Kutschireiter et al.,
2017; Legenstein & Maass, 2014; Shi & Griffiths, 2009) on which, a neural
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
303
model of pfN and—its greedy version—MPN could be based. In a similar
spirit, the updating scheme of Variational SMiLe may be implemented in
biological neural networks (for distributions in the exponential family).
Our theoretical framework for modulation of learning by the Bayes
Factor Surprise SBF is related to the body of literature on neo-Hebbian
three-factor learning rules (Frémaux & Gerstner, 2016; Gerstner et al., 2018;
Lisman et al., 2011), where a third factor indicating reward or surprise en-
ables or modulates a synaptic change or a belief update (Yu, 2012; Yu &
Dayan, 2005). We have shown how Bayesian or approximate Bayesian in-
ference naturally leads to such a third factor that modulates learning via
the surprise modulated adaptation rate γ (SBF
, m). This may offer novel in-
terpretations of behavioral and neurophysiological data and help in under-
standing how three-factor learning computations may be implemented in
the brain.
3.2 Related Work.
3.2.1 Exact Bayesian Inference. As already described in section 2.2.2, for
the generative model in Figure 1, it is possible to find an exact online
Bayesian update of the belief using a message-passing algorithm (Adams
& MacKay, 2007). The space and time complexity of the algorithm increases
linearly with t, which makes it unsuitable for an online learning setting.
However, approximations like dropping messages below a certain thresh-
old (Adams & MacKay, 2007) or stratified resampling (Fearnhead & Liu,
2007) allow reducing the computational complexity. The former has a vari-
able number of particles in time, and the latter needs solving a complicated
nonlinear equation at each time step in order to reduce the number of par-
ticles to N (called SORN in section 2).
Our message-passing algorithm with a finite number of particles (mes-
sages) N (MPN; see algorithm 3) is closely related to these algorithms and
can be seen as a biologically more plausible variant of the other two. All
three algorithms have the same update rules given by equations 2.19 and
2.18. Hence the algorithms of both Adams and MacKay (2007) and Fearn-
head and Liu (2007) have the same surprise modulation as our MPN. The
difference lies in their approaches to eliminate less “important” particles.
In the literature of switching state-space models (Barber, 2012), the gen-
erative models of the kind in Figure 1 are known as “reset models,” and the
message-passing algorithm of Adams and MacKay (2007) is known to be
the standard algorithm for inference over these models (Barber, 2012). See
Barber (2006, 2012), and Ghahramani & Hinton (2000) for other variations of
switching state-space models and examples of approximate inference over
them.
3.2.2 Leaky Integration and Variations of Delta Rules. In order to esti-
mate some statistics, leaky integration of new observations is a particularly
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
304
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
simple form of a trade-off between integrating and forgetting. After a tran-
sient phase, the update of a leaky integrator takes the form of a delta rule
that can be seen as an approximation of exact Bayesian updates (Heilbron
& Meyniel, 2019; Meyniel et al., 2016; Ryali et al., 2018; Yu & Cohen, 2009).
This update rule was found to be biologically plausible and consistent with
human behavioral data (Meyniel et al., 2016; Yu & Cohen, 2009). However,
Behrens et al. (2007) and Heilbron and Meyniel (2019) demonstrated that in
some situations, the exact Bayesian model is significantly better than leaky
integration in explaining human behavior. The inflexibility of leaky integra-
tion with a single, constant leak parameter can be overcome by a weighted
combination of multiple leaky integrators (Wilson et al., 2013), where the
weights are updated in a similar fashion as in the exact online methods
(Adams & MacKay, 2007; Fearnhead & Liu, 2007), or by considering an
adaptive leak parameter (Nassar et al., 2012, 2010). We have shown that
the two algorithms of Nassar et al. (2012, 2010) can be generalized to gaus-
sian prior beliefs (Nas10∗
∗
). Our results show that these algo-
and Nas12
rithms also inherit the surprise modulation of the exact Bayesian inference.
Our surprise-dependent adaptation rate γ can be interpreted as a surprise-
modulated leak parameter.
3.2.3 Other Approaches. Learning in the presence of abrupt changes has
also been considered without explicit assumptions about the underlying
generative model. One approach uses a surprise-modulated adaptation
rate (Faraji et al., 2018) similar to equation 2.9. The Surprise-Minimization
Learning (SMiLe) algorithm of Faraji et al. (2018) has an updating rule sim-
ilar to the one of VarSMiLe (see equations 2.14 and 2.15). The adaptation
rate modulation, however, is based on the Confidence Corrected Surprise
(Faraji et al., 2018) rather than the Bayes Factor Surprise, and the trade-off in
its update rule is between resetting and staying with the latest belief rather
than between resetting and integrating (see section 4).
Other approaches use different generative models, such as conditional
sampling of the parameters also when there is a change (Glaze et al., 2015;
Yu & Dayan, 2005), a deeper hierarchy without fixed change probability pc
(Wilson et al., 2010), or drift in the parameters (Gershman, Radulescu, Nor-
man, & Niv, 2014; Mathys et al., 2011). A recent work shows that inference on
a simpler version the generative model of Figure 1, with no change points
but with a noisy inference style, can explain human behavior well even
when the true generative model of the environment is different and more
complicated (Findling et al., 2019). They develop a heuristic approach to
add noise in the inference process of a Particle Filter. Their algorithm can be
interpreted as a surprise-modulated Particle Filter, where the added noise
scales with a measure of surprise—conceptually equivalent to Bayesian sur-
prise (Itti & Baldi, 2006; Schmidhuber, 2010; Storck, Hochreiter, & Schmid-
huber, 1995). Moreover, another recent work (Prat-Carrabin et al., 2020)
shows that approximate sampling algorithms (like Particle Filtering) can
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
305
explain human behavior better than their alternatives in tasks closely re-
lated to the generative model of Figure 1. The signal processing literature
provides further methods to address the problem of learning in nonsta-
tionary environments with abrupt changes; see Aminikhanghahi and Cook
(2017) for a review, and Cummings et al. (2018), Lin et al. (2017), Masegosa
et al. (2017), and Özkan et al. (2013) for a few recent examples.
3.3 Surprise Modulation as a Generic Phenomenon. Learning rate
modulation similar to the one in equation 2.9 has been previously proposed
in the neuroscience literature with either heuristic arguments (Faraji et al.,
2018) or Bayesian arguments for a particular experimental task, for exam-
ple, when samples are drawn from a gaussian distribution (Nassar et al.,
2012, 2010). The fact that the same form of modulation is at the heart of
Bayesian inference for our relatively general generative model, that it is de-
rived without any further assumptions, and is not a priori defined, is in our
view an important contribution to the field of adaptive learning algorithms
in computational neuroscience.
Furthermore, the results of our three approximate methods (Particle Fil-
tering, Variational SMiLe, and Message Passing with a fixed N number of
messages) as well as some previously developed ones (Adams & MacKay,
2007; Fearnhead & Liu, 2007; Nassar et al., 2012, 2010) demonstrate that the
surprise-based modulation of the learning rate is a generic phenomenon.
Therefore, regardless of whether the brain uses Bayesian inference or an
approximate algorithm (Bogacz, 2017, 2019; Findling et al., 2019; Friston,
2010; Gershman, 2019; Gershman et al., 2014; Mathys et al., 2011; Nassar
et al., 2012, 2010; Prat-Carrabin et al., 2020), the notion of Bayes Factor Sur-
prise and the way it modulates learning (see equations 2.12 and 2.9) look
generic.
The generality of the way surprise should modulate learning depends on
an agent’s inductive biases about its environment and is directly associated
with the assumed generative model of the world. The generative model we
considered in this work involves abrupt changes. However, one can think of
other realistic examples, where an improbable observation does not indicate
a persistent change but a singular event or an outlier, similar to d’Acremont
and Bossaerts (2016) and Nassar, Bruckner, and Frank (2019). In such sit-
uations, the belief should not be changed, and surprise should attenuate
learning rather than accelerate it. Interestingly, we can show that exact and
approximate Bayesian inference on such a generative model naturally leads
to a surprise-modulated adaptation rate γ (SBF
, m), with the same definition
of SBF, where the trade-off is not between integrating and resetting, but be-
tween integrating and ignoring the new observation.1
1
Liakoni, V., Lehmann, M. P., Modirshanechi, A., Brea, J., Herzog, M. H., Gerstner,
W., & Preuschoff, K. Dissociating brain regions encoding reward prediction error and surprise.
Manuscript in preparation.
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
306
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
An aspect that the generative model we considered does not capture is
the potential return to a previous state of the environment rather than a
change to a completely new situation. If in our example of Figure 1B, the
bridge with the shortest path is temporarily closed for repairs, your friend
would again have to take the longer detour; therefore, her arrival times will
return to their previous values (i.e. increase). In such cases, an agent should
infer whether the surprising observation stems from a new hidden state or
from an old state stored in memory. Relevant generative models have been
studied in Fox, Sudderth, Jordan, and Willsky (2011), Gershman, Monfils,
Norman, and Niv (2017), Gershman et al. (2014) and are out of the scope of
this article.
3.4 Bayes Factor Surprise as a Novel Measure of Surprise. In view of
a potential application in the neurosciences, a definition of surprise should
reflect how unexpected an event is and should modulate learning. Surpris-
ing events indicate that our belief is far from the real world and suggest
updating our model of the world, or, for a large surprise, simply forgetting
it. Forgetting is the same as returning to the prior belief. However, an ob-
servation yt+1 can be unexpected under both the prior π (0) and the current
beliefs π (t). In these situations, it is not obvious whether forgetting helps.
Therefore, the modulation between forgetting or not should be based on
a comparison between the probability of an event under the current belief
P(yt+1
; π (t)) and its probability under the prior belief P(yt+1
The definition of the Bayes Factor Surprise SBF as the ratio of P(yt+1
; π (t))
; π (0)) exploits this insight. The Bayes Factor Surprise appears as
and P(yt+1
a modulation factor in the recursive form of the exact Bayesian update rule
for a hierarchical generative model of the environment. When two events
are equally probable under the prior belief, the one that is less expected
under the current belief is more surprising, satisfying the first property. At
the same time, when two events are equally probable under the current be-
lief, the one that is more expected under the prior belief is more surprising,
signaling that forgetting may be beneficial.
; π (0)).
SBF can be written (using equation 2.6) in a more explicit way as
SBF(yt+1
; π (t)) = P(yt+1
P(yt+1
; π (0))
; π (t))
=
(cid:9)
PY (yt+1
PY (yt+1
(cid:9)
(cid:10)
|(cid:3))
|(cid:3))
(cid:10) .
Eπ (0)
Eπ (t)
(3.1)
Note that the definition by itself is independent of the specific form of the
generative model. In other words, even in the cases where data are gen-
erated with another generative model (e.g., the real world), SBF could be
a candidate surprise measure in order to interpret brain activity or pupil
dilation.
We formally discussed the connections between the Bayes Factor Sur-
prise and Shannon Surprise (Shannon, 1948) and showed that they are
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
307
closely linked. We showed that the modulated adaptation rate (γ ) used in
(approximate) Bayesian inference is a function of the difference between
the Shannon Surprise under the current and the prior beliefs, but cannot be
expressed solely by the Shannon Surprise under the current one. Our for-
mal comparisons between these two different measures of surprise lead to
specific experimentally testable predictions.
The Bayesian Surprise SBa (Itti & Baldi, 2006; Schmidhuber, 2010; Storck
et al., 1995) and the Confidence Corrected Surprise SCC (Faraji et al., 2018)
are two other measures of surprise in neuroscience. The learning modula-
tion derived in our generative model cannot be expressed as a function of
SBa and SCC. However, one can hypothesize that SBa is computed after the
update of the belief to measure the information gain of the observed event
and is therefore not a good candidate for online learning modulation. The
Confidence Corrected Surprise SCC takes into account the shape of the belief
and therefore includes the effects of confidence, but it does not consider any
¯M1( ˆθ , δ, s = +1, C) =
information about the prior belief. Hence, a result of
¯M1( ˆθ , δ, s = −1, C) in our first experimental prediction would be consistent
with the corresponding behavioral or physiological indicator reflecting the
SCC.
3.5 Difference in Shannon Surprise, an Alternative Perspective. Fol-
lowing our formal comparison in section 2.4, SBF can be expressed as a de-
terministic function of the difference in Shannon Surprise as
SBF
= (1 − pc)e(cid:8)SSh
1 − pce(cid:8)SSh
.
(3.2)
All of our theoretical results can be rewritten by replacing SBF with this
function of (cid:8)SSh. Moreover, because there is a one-to-one mapping between
SBF and (cid:8)SSh, from a systemic point of view, it is not possible to specify
whether the brain computes the former or the latter by analysis of behav-
ioral data and biological signals. This suggests an alternative interpretation
of surprise-modulated learning as an approximation of Bayesian inference:
What the brain computes and perceives as surprise or prediction error may
be Shannon Surprise, but the modulating factor in a three-factor synaptic
plasticity rule (Frémaux & Gerstner, 2016; Gerstner et al., 2018; Lisman et al.,
2011) may be implemented by comparing the Shannon Surprise values un-
der the current and the prior beliefs.
3.6 Future Directions. A natural continuation of our study is to test our
experimental predictions in human behavior and physiological signals in
order to investigate which measures of surprise are used by the brain. In
a similar direction, our approximate learning algorithms can be evaluated
on human behavioral data from experiments that use a similar generative
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
308
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
model (Behrens et al., 2007; Glaze et al., 2015; Heilbron & Meyniel, 2019;
Nassar et al., 2012, 2010; Wilson et al., 2013; Yu & Dayan, 2005) in order to
assess if our proposed algorithms achieve similar or better performance in
explaining data.
Finally, our methods can potentially be applied to model-based rein-
forcement learning in nonstationary environments. In recent years, there
has been growing interest in adaptive or continually learning agents in
changing environments in the form of continual learning and meta-learning
(Lomonaco, Desai, Culurciello, & Maltoni, 2019; Traoré et al., 2019). Many
continual learning model-based approaches make use of some procedure to
detect changes (Lomonaco et al., 2019; Nagabandi et al., 2018). Integrating
SBF and a learning rate γ (SBF) into a reinforcement learning agent would
be an interesting future direction.
4 Methods
4.1 Proof of the proposition. By definition,
π (t+1)(θ ) ≡ P((cid:3)t+1
= θ |y1:t+1).
(4.1)
We exploit the Markov property of the generative model in equations 2.2 to
2.4, condition on the fixed past y1:t and rewrite
π (t+1)(θ ) = PY (yt+1
|θ )P((cid:3)t+1
|y1:t )
P(yt+1
= θ |y1:t )
.
(4.2)
By marginalization over the hidden state Ct+1, the second factor in the nu-
merator of equation 4.2 can be written as
P((cid:3)t+1
= θ |y1:t ) = (1 − pc)π (t)(θ ) + pcπ (0)(θ ).
(4.3)
The denominator in equation 4.2 can be written as
P(yt+1
|y1:t ) =
(cid:4)
PY (yt+1
(cid:4)
|θ )P((cid:3)t+1
= θ |y1:t )dθ
(cid:4)
= (1 − pc)
PY (yt+1
|θ )π (t)(θ )dθ + pc
PY (yt+1
|θ )π (0)(θ )dθ
= (1 − pc)P(yt+1
; π (t)) + pcP(yt+1
; π (0)),
(4.4)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
309
where we used the definition in equation 2.6. Using these two expanded
forms, equation 4.2 can be rewritten as
π (t+1)(θ ) =
(cid:12)
|θ )
PY (yt+1
(1 − pc)P(yt+1
(cid:13)
(1 − pc)π (t)(θ ) + pcπ (0)(θ )
; π (0))
; π (t)) + pcP(yt+1
.
(4.5)
We define P(θ |yt+1) as the posterior given a change in the environment as
P(θ |yt+1) = PY (yt+1
P(yt+1
|θ )π (0)(θ )
; π (0))
.
Then, we can write equation 4.5 as
(4.6)
(θ ) + pcP(yt+1
; π (t)) + pcP(yt+1
; π (0))P(θ |yt+1)
; π (0))
π (t+1)(θ ) = (1 − pc)P(yt+1
; π (t))π (t+1)
B
(1 − pc)P(yt+1
;π (0) )
;π (t) ) P(θ |yt+1)
;π (0) )
;π (t) )
P(yt+1
P(yt+1
P(yt+1
P(yt+1
(θ ) + γt+1P(θ |yt+1),
=
π (t+1)
B
(θ ) + pc
1−pc
1 + pc
1−pc
= (1 − γt+1)π (t+1)
B
where π (t+1)
B
(θ ) is defined in equation 2.8, and
(cid:5)
γt+1
= γ
SBF(yt+1
; π (t)),
(cid:6)
pc
1 − pc
(4.7)
(4.8)
with SBF defined in equation 2.7, and γ (S, m) defined in equation 2.9. Thus,
our calculation yields a specific choice of surprise (S = SBF) and a specific
value for the saturation parameter m = pc
1−pc
.
4.2 Derivation of the Optimization-Based Formulation of VarSMiLe:
Algorithm 1. To derive the optimization-based update rule for the Varia-
tional SMiLe rule and the relation of the bound Bt+1 with surprise, we used
the same approach used in Faraji et al. (2018).
4.2.1 Derivation of the Update Rule. Consider the general form of the fol-
lowing variational optimization problem:
∗
q
(θ ) = argmin DKL
q(θ ) s.t. DKL
(cid:10)
(cid:9)
q(θ )||p2(θ )
(cid:9)
(cid:10)
q(θ )||p1(θ )
< B and Eq[1] = 1,
(4.9)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
310
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
(cid:9)
(cid:10)
0, DKL[p1(θ )||p2(θ )]
where B ∈
solutions
. On the extremes of B, we will have trivial
(cid:3)
∗
q
(θ ) =
p2(θ )
p1(θ )
if B = 0
if B = DKL[p1(θ )||p2(θ )].
(4.10)
Note that the Kullback–Leibler divergence is a convex function with re-
spect to its first argument—q in our setting. Therefore, both the objective
function and the constraints of the optimization problem in equation 4.9
are convex. For convenience, we assume that the parameter space for θ is
discrete, but the final results can be generalized also to the continuous case
with some considerations (see Beal, 2003, and Faraji et al., 2018). For the
discrete setting, the optimization problem in equation 4.9 can be rewritten
as
∗
q
(θ ) = argmin
(cid:11)
(cid:7)
log(q(θ )) − log(p1(θ ))
q(θ )
q(θ ) s.t.
(cid:11)
(cid:7)
q(θ )
θ
θ
log(q(θ )) − log(p2(θ )) < B and
(cid:11)
θ
q(θ ) = 1.
(4.11)
For solving the mentioned problem, one should find a q that satisfies the
Karush-Kuhn-Tucker (KKT) conditions (Boyd & Vandenberghe, 2004) for
L =
(cid:11)
θ
(cid:5)
q(θ )log
(cid:6)
q(θ )
p1(θ )
+ λ
(cid:11)
θ
q(θ )log
(cid:5)
(cid:6)
q(θ )
p2(θ )
− λB + α − α
∂L
∂q(θ )
(cid:5)
(cid:6)
(cid:5)
(cid:6)
= log
q(θ )
p1(θ )
q(θ )
p2(θ )
= (1 + λ)log(q(θ )) − log(p1(θ )) − λlog(p2(θ )) + 1 + λ − α,
+ 1 + λlog
+ λ − α
(cid:11)
θ
q(θ ),
(4.12)
(4.13)
where λ and α are the parameters of the dual problem. Defining γ = λ
1+λ
and considering the partial derivative to be zero, we have
∗
log(q
(θ )) = (1 − γ )log(p1(θ )) + γ log(p2(θ )) + Const(α, γ ),
(4.14)
where α is always specified in a way to have Const(α, γ ) as the normaliza-
tion factor:
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Const (α, γ ) = −log(Z(γ ))
(cid:11)
θ
where Z(γ ) =
p1−γ
1
γ
(θ )p
2 (θ ).
(4.15)
Learning in Volatile Environments With the Bayes Factor Surprise
311
According to the KKT conditions, λ ≥ 0, and as a result, γ ∈ [0, 1]. There-
fore, considering p1(θ ) = ˆπ (t+1)
(θ ) and p2(θ ) = P(θ |yt+1), the solution to the
B
optimization problem of equations 2.14 and 2.15 is 2.13.
4.2.2 Proof of the Claim That B Is a Decreasing Function of Surprise. Accord-
ing to the KKT conditions,
(cid:7)
λ
DKL[q
∗
(θ )||p2(θ )] − B
(cid:8)
= 0.
For the case that λ (cid:12)= 0 (i.e., γ (cid:12)= 0), we have B as a function of γ :
B(γ ) = DKL[q
∗
(θ )||p2(θ )]
(cid:5)
(cid:14)
= (1 − γ )Eq∗
log
(cid:6)(cid:15)
− log(Z(γ )).
p1(θ )
p2(θ )
(4.16)
(4.17)
Now, we show that the derivate of B(γ ) with respect to γ is always nonpos-
itive. To do so, we first compute the derivative of Z(γ ) as
∂log(Z(γ ))
∂γ
= 1
Z(γ )
∂
∂γ
(cid:11)
θ
p1−γ
1
γ
(θ )p
2 (θ )
(cid:5)
= 1
Z(γ )
(cid:14)
(cid:11)
θ
(cid:5)
= Eq∗
log
p1−γ
1
γ
(θ )p
2 (θ )log
(cid:6)(cid:15)
,
p2(θ )
p1(θ )
(cid:6)
p2(θ )
p1(θ )
(4.18)
and the derivate of Eq∗ [h(θ )] for an arbitrary h(θ ) as
∂Eq∗ [h(θ )]
∂γ
∂
∂γ
(cid:11)
θ
(cid:11)
=
=
=
(cid:11)
θ
∗
q
(θ )h(θ )
∗
q
(θ )h(θ )
∂
∂γ log(q
∗
(θ ))
∗
q
(θ )h(θ )
∂
∂γ
(cid:7)
(cid:8)
(1 − γ )log(p1(θ )) + γ log(p2(θ )) − log(Z(γ ))
θ
= Eq∗
(cid:5)
(cid:14)
h(θ )log
(cid:6)(cid:15)
p2(θ )
p1(θ )
(cid:10)
(cid:9)
h(θ )
Eq∗
− Eq∗
(cid:14)
(cid:5)
log
(cid:6)(cid:15)
.
p2(θ )
p1(θ )
(4.19)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
312
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
Using the last previous equations, we have
(cid:5)
(cid:14)(cid:5)
= −(1 − γ )
Eq∗
log
∂B(γ )
∂γ
(cid:5)
(cid:14)
(cid:5)
= −(1 − γ )Varq∗
log
p2(θ )
p1(θ )
p2(θ )
p1(θ )
(cid:6)(cid:15)
(cid:6)(cid:6)2(cid:15)
(cid:14)
(cid:5)
− Eq∗
log
(cid:6)(cid:15)2(cid:6)
p2(θ )
p1(θ )
≤ 0,
(4.20)
which means that B is a decreasing function of γ . Because γ is an increasing
function of surprise, B is also a decreasing function of surprise.
4.3 Derivations of Message Passing N: Algorithm 2. For the sake of
clarity and coherence, we repeat here some steps performed in section 2.
Following the idea of Adams and MacKay (2007) we first define the ran-
dom variable Rt = min{n ∈ N : Ct−n+1
= 1}. This is the time window from
the last change point. Then the exact Bayesian form for π (t)(θ ) can be writ-
ten as
π (t)(θ ) = P((cid:3)t+1
= θ |y1:t )
=
t(cid:11)
rt =1
P(rt|y1:t )P((cid:3)t+1
= θ |rt, y1:t ).
(4.21)
To have a formulation similar to the one of Particle Filtering, we rewrite the
belief as
π (t)(θ ) =
t−1(cid:11)
k=0
w(k)
t P((cid:3)t+1
= θ |Rt = t − k, y1:t ) =
t−1(cid:11)
k=0
w(k)
t
π (t)
k (θ ),
(4.22)
where π (t)
t − k, and w(k)
k (θ ) = P((cid:3)t = θ |Rt = t − k, y1:t ) is the term corresponding to Rt =
= P(Rt = t − k|y1:t ) is its corresponding weight at time t.
t
To update the belief after observing yt+1, one can use the exact Bayesian
(θ )
recursive formula, equation 2.10, for which one needs to compute π (t+1)
as
B
π (t+1)
B
(θ ) =
π (t)(θ )PY (yt+1
; π (t))
P(yt+1
|θ )
= PY (yt+1
P(yt+1
|θ )
; π (t))
t−1(cid:11)
k=0
w(k)
t P((cid:3)t+1
= θ |Rt = t − k, y1:t ).
(4.23)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
313
Using Bayes’ rule and the conditional independence of observations, we
have
π (t+1)
B
(θ ) = PY (yt+1
P(yt+1
|θ )
; π (t))
=
1
; π (t))
P(yt+1
t−1(cid:11)
k=0
t−1(cid:11)
k=0
P(yk+1:t
w(k)
t
|(cid:3)t+1
P(yk+1:t
= θ , Rt = t − k)π (0)(θ )
|Rt = t − k)
(cid:16)
t+1
i=k+1 PY (yi
P(yk+1:t
|θ )π (0)(θ )
|Rt = t − k)
w(k)
t
,
(4.24)
and once again, by using Bayes’ rule and the conditional independence of
observations, we find
π (t+1)
B
(θ ) =
=
1
; π (t))
P(yt+1
t−1(cid:11)
w(k)
t
P(yk+1:t+1
|Rt+1
= t − k + 1)
P(yk+1:t
|Rt = t − k)
× P((cid:3)t+1
= t − k + 1, yk+1:t+1)
1
; π (t))
P(yt+1
w(k)
t P(yt+1
|Rt+1
= t − k + 1, y1:t )
k=0
= θ |Rt+1
t−1(cid:11)
k=0
= θ |Rt+1
× P((cid:3)t+1
= t − k + 1, y1:t+1).
(4.25)
This gives us
π (t+1)
B
(θ ) =
t−1(cid:11)
w(k)
t
P(yt+1
P(yt+1
; π (t)
k )
; π (t))
×
k=0
× P((cid:3)t+1
= θ |Rt+1
= t − k + 1, y1:t+1),
(4.26)
and finally,
w(k)
B,t+1
= P(yt+1
P(yt+1
; π (t)
k )
; π (t))
w(k)
t
.
(4.27)
Using the recursive formula, the update rule for the weights for 0 ≤ k ≤
t − 1 is
w(k)
t+1
= (1 − γt+1)w(k)
B,t+1
= (1 − γt+1)
P(yt+1
P(yt+1
; π (t)
k )
; π (t))
w(k)
t
,
and for the newly added particle t,
w(t)
t+1
where γt+1
,
= γt+1
(cid:7)
= γ
SBF(yt+1
; π (t)), m = pc
1−pc
(cid:8)
of equation 2.9.
(4.28)
(4.29)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
314
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
The MPN algorithm uses equations 4.22, 4.28, and 4.29 for computing the
belief for t ≤ N, which is same as the exact Bayesian inference. For t > 氮,
it first updates the weights in the same fashion as equations 4.28 和 4.29,
keeps the greatest N weights, and sets the rest weights equal to 0. After nor-
malizing the new weights, it uses equation 4.22 (but only over the particles
with nonzero weights) to compute the belief ˆπ (t). (For the particular case of
exponential family, see algorithm 2 for the pseudocode.)
4.4 Derivation of the Weight Update for Particle Filtering: Algorithm
3. We derive here the weight update for the particle filter. 差异在于
our formalism from a standard derivation (Särkkä, 2013) is the absence of
, y1:t ) (西德:12)=
the Markov property of conditional observations (IE。, 磷(yt+1
|ct+1)). Our goal is to perform the approximation
磷(yt+1
|c1:t+1
磷(c1:t+1
|y1:t+1) ≈
氮(西德:11)
我=1
w(我)
t+1
δ(c1:t+1
− c(我)
1:t+1).
(4.30)
Given a proposal sampling distribution Q, for the weight of particle i at time
t + 1, 我们有
w(我)
t+1
∝
1:t+1
磷(C(我)
问(C(我)
1:t+1
|y1:t+1)
|y1:t+1)
∝
磷(C(我)
1:t+1
问(C(我)
, yt+1
|y1:t )
|y1:t+1)
1:t+1
,
w(我)
t+1
∝
磷(yt+1
问(C(我)
t+1
, C(我)
t+1
|C(我)
1:t
|C(我)
, y1:t )磷(C(我)
|y1:t )
1:t
1:t
, y1:t+1)问(C(我)
|y1:t )
1:t
,
(4.31)
where the only assumption for the proposal distribution Q is that the previ-
ous hidden states c(我)
1:t are independent of the next observation, yt+1, 哪个
allows keeping the previous samples c(我)
1:t to c(我)
1:t+1 and
writing the update of the weights in a recursive way (Särkkä, 2013).
1:t when going from c(我)
Notice that w(我)
t
step. 所以,
∝ P(C(我)
1:t
问(C(我)
1:t
|y1:t )
|y1:t )
are the weights calculated at the previous time
w(我)
t+1
∝
磷(yt+1
问(C(我)
t+1
, C(我)
t+1
|C(我)
1:t
|C(我)
, y1:t )
1:t
, y1:t+1)
w(我)
t
.
(4.32)
For the choice of Q, we use the optimal proposal function in terms of vari-
ance of the weights (Doucet et al., 2000):
问(C(我)
t+1
|C(我)
1:t
, y1:t+1) =P(C(我)
t+1
|C(我)
1:t
, y1:t+1).
(4.33)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
315
Using Bayes’ rule and equations 4.32 和 4.33, after a few steps of algebra,
我们有
w(我)
t+1
∝
磷(yt+1
磷(C(我)
t+1
, C(我)
t+1
|C(我)
1:t
(西德:12)
|C(我)
, y1:t )
1:t
, y1:t+1)
w(我)
t
=P(yt+1
|C(我)
1:t
, y1:t )w(我)
t
∝
(1 − pc)磷(yt+1
|C(我)
1:t
, y1:t, C(我)
t+1
= 0) + pcP(yt+1
|C(我)
1:t
, y1:t, C(我)
t+1
(西德:13)
= 1)
w(我)
t
.
(4.34)
Using the definition in equation 2.6, we have P(yt+1
磷(yt+1
= 1) =P(yt+1
) 和P(yt+1
, y1:t, C(我)
t+1
; ˆπ (t)
我
|C(我)
1:t
|C(我)
1:t
, y1:t, C(我)
= 0) =
t+1
; 圆周率 (0)). 所以, 我们有
(西德:17)
w(我)
t+1
=
(1 − pc)磷(yt+1
; ˆπ (t)
我
) + pcP(yt+1
(西德:18)
; 圆周率 (0))
w(我)
t
/Z,
where Z is the normalization factor,
Z = (1 − pc)磷(yt+1
; ˆπ (t)) + pcP(yt+1
; 圆周率 (0)),
where we have
磷(yt+1
; ˆπ (t)) =
氮(西德:11)
我=1
w(我)
t P(yt+1
; ˆπ (t)
我
).
(4.35)
(4.36)
(4.37)
We now compute the weights corresponding to π (t+1)
2.8
乙
as defined in equation
w(我)
乙,t+1
=P(yt+1
磷(yt+1
; ˆπ (t)
)
我
; ˆπ (t))
w(我)
t
.
(4.38)
Combining equations 4.35, 4.36, 和 4.38, we can then rewrite the weight
update rule as
w(我)
t+1
= (1 − γt+1)w(我)
乙,t+1
+ γt+1
w(我)
t
,
(西德:12)
(西德:13)
(4.39)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
where γt+1
= γ
SBF(yt+1
; ˆπ (t)),
个人电脑
1−pc
of equation 2.9.
At every time step t + 1 we sample each particle’s hidden state ct+1 from
the proposal distribution. Using equation 4.33, 我们有
316
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
问(C(我)
t+1
= 1|C(我)
1:t
, y1:t+1) =
(1 − pc)磷(yt+1
(西德:5)
pcP(yt+1
; ˆπ (t)
我
; 圆周率 (0))
) + pcP(yt+1
(西德:6)
; 圆周率 (0))
= γ
SBF(yt+1
; ˆπ (t)
我
),
.
(4.40)
个人电脑
1 − pc
We implemented the Sequential Importance Resampling algorithm
(Doucet et al., 2000; Gordon et al., 1993), where the particles are resampled
when their effective number falls below a threshold. The effective number
of the particles is defined as (Doucet et al., 2000; Särkkä, 2013)
Neff
≈
(西德:19)
1
我=1(w(我)
氮
t )2
.
(4.41)
When Neff is below a critical threshold, the particles are resampled with
replacement from the categorical distribution defined by their weights, 和
all their weights are set to w(我)
= 1/N. We did not optimize the parameter
t
Neff, and following Doucet and Johansen (2009), we performed resampling
when Neff
≤ N/2.
4.5 Surprise-Modulation as a Framework for Other Algorithms.
4.5.1 SMiLe Rule. The Confidence Corrected Surprise (Faraji et al., 2018)
是
SCC(yt+1
; ˆπ (t)) = DKL
(西德:9)
(西德:10)
ˆπ (t)(我 )|| ˜P(我 |yt+1)
,
where ˜P(我 |yt+1) is the scaled likelihood defined as
˜P(我 |yt+1) =
(西德:20)
PY (yt+1
PY (yt+1
|我 )
|我 (西德:5))dθ (西德:5)
.
(4.42)
(4.43)
Note that this becomes equal to P(我 |yt+1) if the prior belief π (0) is a uni-
form distribution (参见方程 2.11).
With the aim of minimizing the Confidence Corrected Surprise by up-
dating the belief during time, Faraji et al. (2018) suggested an update rule
solving the optimization problem,
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
ˆπ (t+1)(我 ) = arg min
DKL
(西德:9)
q
s.t. DKL
(西德:9)
(西德:10)
q(我 )|| ˜P(我 |yt+1)
(西德:10)
q(我 )|| ˆπ (t)(我 )
≤ Bt+1
,
(4.44)
Learning in Volatile Environments With the Bayes Factor Surprise
317
(西德:10)
(西德:9)
0, DKL[磷(我 |yt+1)|| ˆπ (t)(我 )]
where Bt+1
showed that the solution to this optimization problem is
∈
is an arbitary bound. 作者
(西德:7)
(西德:8)
ˆπ (t+1)(我 )
日志
= (1 − γt+1) 日志
(西德:8)
(西德:7)
ˆπ (t)(我 )
+ γt+1 log
(西德:7)
(西德:8)
˜P(我 |yt+1)
+ Const.,
(4.45)
where γt+1
4.44.
∈ [0, 1] is specified so that it satisfies the constraint in equation
Although equation 4.45 looks very similar to equation 2.13, it signifies
a trade-off between the latest belief ˆπ (t) and the belief updated by only the
most recent observation ˜P(我 |yt+1), 那是, a trade-off between adherence to
the current belief and reset. While SMiLe adheres to the current belief ˆπ (t),
Variational SMiLe integrates the new observation with the current belief to
get ˆπ (t)
乙 , which leads to a trade-off similar to the one of the exact Bayesian
inference (see equations 2.12 和 2.10).
To modulate the learning rate by surprise, Faraji et al. (2018) 经过考虑的
the boundary Bt+1 as a function of the Confidence Corrected Surprise,
Bt+1
= Bmax
C
(西德:13)
(西德:12)
SCC(yt+1), 米
where Bmax
= DKL[磷(我 |yt+1)|| ˆπ (t)(我 )],
(4.46)
where m is a free parameter. 然后, γt+1 is found by satisfying the constraint
of the optimization problem in equation 4.44 using equations 4.45 和 4.46.
|μt+1
∼ N (μt+1
4.5.2 Nassar’s Algorithm. For the particular case that observations are
drawn from a gaussian distribution with known variance and unknown
, σ 2) and θt = μt, Nassar et al. (2012, 2010) 骗局-
意思是, yt+1
sidered the problem of estimating the expected μt and its variance rather
than a probability distribution (IE。, 信仰) over it, implicitly assuming that
the belief is always a gaussian distribution. The algorithms of Nassar et al.
(2012, 2010) were developed for the case that whenever the environment
变化, the mean μt+1 is drawn from a uniform prior with a range of val-
ues much larger than the width of the gaussian likelihood function. 这
authors showed that in this case, the expected μt+1 (IE。, ˆμt+1) estimated by
the agent upon observing a new sample yt+1 is
ˆμt+1
= ˆμt + αt+1(yt+1
− ˆμt ),
with αt+1 the adaptive learning rate given by
αt+1
= 1 + (西德:12)t+1 ˆrt
1 + ˆrt
,
(4.47)
(4.48)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
318
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
数字 12: Gaussian estimation task: Transient performance after changes for
original algorithms of Nassar et al. (2010) and Nassar et al. (2012). Mean squared
error for the estimation of μ
t at each time step n after an environmental change,
= 0.1 (left panel)
= n] 随着时间的推移; σ = 0.1, 个人电脑
那是, the average of MSE[ ˆ(西德:3)
t
= 0.01 (right panel). The shaded area corresponds to the standard
and σ = 5, 个人电脑
error of the mean. Nas10∗, Nas12∗: variants of Nassar et al. (2010) and Nassar
等人. (2012), 分别, Nas10 Original, Nas12 Original: Original algorithms
of Nassar et al. (2010) and Nassar et al. (2012), 分别.
|Rt
where ˆrt is the estimated time since the last change point (IE。, the esti-
mated Rt = min{n ∈ N : Ct−n+1
= 1|y1:t+1) the prob-
= 1) 和 (西德:12)t+1
ability of a change given the observation. Note that this quantity (i.e. 这
posterior change point probability) is the same as our adaptation rate γt+1
of equation 2.9.
=P(ct+1
We next extend their approach to a more general case where the prior is
a gaussian distribution with arbitrary variance, μt+1
0 ). We then
discuss the relation of this method to Particle Filtering. A performance com-
∗
parison between our extended algorithms Nas10
和他们的
original versions Nas10 and Nas12 is depicted in Figures 12 and 13.
∗
and Nas12
∼ N (μ
0
, σ 2
4.5.3 Nas10∗ and Nas12∗ Algorithms. Let us consider that y1:t are ob-
服务, the time since the last change point rt is known, and the agent’s
current estimation of μt is ˆμt. It can be shown (see the appendix for the
derivation) that the expected μt+1 (IE。, ˆμt+1) upon observing the new sam-
ple yt+1 is
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
(西德:5)
= (1 − γt+1)
ˆμt +
ˆμt+1
1
ρ + rt + 1
(西德:6)
− ˆμt )
(yt+1
(西德:6)
(西德:5)
+ γt+1
μ
0
+ 1
ρ + 1
(yt+1
− μ
0)
,
(4.49)
Learning in Volatile Environments With the Bayes Factor Surprise
319
数字 13: Gaussian estimation task: Steady-state performance for original al-
gorithms of Nassar et al. (2010) and Nassar et al. (2012). Difference between the
mean squared error of each algorithm and the optimal solution (Exact Bayes),
那是, 平均数 (西德:8)均方误差[ ˆ(西德:3)
t] over time for each combination of environmental
parameters σ and pc. Abbreviations: Nas10∗, Nas12∗: Variants of Nassar et al.
(2010) and Nassar et al. (2012) 分别; Nas10 Original, Nas12 Original:
Original algorithms of Nassar et al. (2010) and Nassar et al. (2012) respectively.
where ρ = σ 2
σ 2
0
tation rate of equation 2.9.
, μ
0 is the mean of the prior distribution, and γt+1 is the adap-
We can see that the updated mean is a weighted average, with surprise-
modulated weights, between integrating the new observation with the cur-
rent mean ˆμt and integrating it with the prior mean μ
0, in the same spirit as
the other algorithms we considered here. 方程 4.49 can also be seen as a
surprise-modulated weighted sum of two delta rules: one including a pre-
− ˆμt )
diction error between the new observation and the current mean (yt+1
and one including a prediction error between the observed sample and the
prior mean (yt+1
− μ
0).
In order to obtain a form similar to the one of Nassar et al. (2012, 2010),
we can rewrite the above formula as
ˆμt+1
=
ρ
ρ + 1
(西德:12)
ˆμt + γt+1(μ
0
(西德:13)
− ˆμt )
+ 1
ρ + 1
(西德:12)
ˆμt + αt+1(yt+1
(西德:13)
− ˆμt )
,
(4.50)
= ρ+γt+1rt +1
ρ+rt +1
where we have defined αt+1
. 因此, the update rule takes the
form of a weighted average, with fixed weights, between two delta rules:
one including a prediction error between the prior mean and the current
意思是 (μ
− ˆμt ) and one including a prediction error between the observed
0
− ˆμt ), both with surprise-modulated
sample and the current mean (yt+1
learning rates.
In Nassar et al. (2012, 2010), the true new mean after a change point is
drawn from a uniform distribution with a range of values much larger than
the width of the gaussian likelihood. Their derivations implicitly approx-
imate the uniform distribution with a gaussian distribution with σ
(西德:14) σ .
0
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
320
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
Note that if σ
= 1+γt+1rt
appears, and αt+1
1+rt
gorithm in equations 4.47 和 4.48, with γt+1
(西德:14) σ , then ρ → 0, so that the first term of equation 4.50 dis-
. This results in the delta rule of the original al-
= (西德:12)t+1.
0
All of the calculations so far were done by assuming that rt is known.
然而, for the case of a nonstationary regime with a history of change
点, the time interval rt is not known. Nassar et al. (2012, 2010) used the
expected time interval ˆrt as an estimate. We make a distinction here between
Nassar et al. (2012) and Nassar et al. (2010):
In Nassar et al. (2010) ˆrt is calculated recursively on each trial in the same
= (1 − γt+1)(ˆrt + 1) + γt+1, 那是, at each time
spirit as equation 2.10: ˆrt+1
step, there is a probability (1 − γt+1) that ˆrt increments by 1 and a prob-
ability γt+1 that it is reset to 1. So ˆrt+1 is the weighted sum of these two
结果. 因此, 方程 4.50 combined with the expected time interval
ˆrt constitutes a generalization of the update rule of Nassar et al. (2010) 为了
the case of gaussian prior N (μ
(看到
0
appendix for pseudocode).
0 ). We call this algorithm Nas10
, σ 2
∗
In Nassar et al. (2012), the variance ˆσ 2
t+1
= Var[μt+1
= σ 2
ˆσ 2
t+1
|y1:t+1] is estimated
− σ 2
is computed.
σ 2
0
given ˆμt, ˆrt, and ˆσ 2
t . Based on this variance, ˆrt+1
The derivation of the recursive computation of ˆσ 2
t+1 for the case of gaussian
priors can be found in the appendix. We call the combination of equation
∗
4.50 with this way of computing the expected time interval ˆrt Nas12
(看
the appendix for pseudocode). These two versions of calculating ˆrt in Nas-
sar et al. (2010) and Nassar et al. (2012) give different results, and we com-
∗
pare our algorithms with both Nas10
in our simulations. 笔记
那, as discussed in section 2.1, the posterior belief at time t + 1 不
generally belong to the same family of distributions as the belief of time t.
然而, we therefore approximate for both algorithms the posterior belief
磷(我 |y1:t+1) by a gaussian.
∗
and Nas12
4.5.4 Nassar’s Algorithm and Particle Filtering with One Particle. 在里面
case of Particle Filtering (参见方程 2.23) with only one particle, 在每一个
time step, we sample the particle’s hidden state with change probability
, y1:t+1) = γt+1, generating a posterior belief that takes two
问(C(1)
t+1
possible values with probability (according to the proposal distribution)
= 1|C(1)
1:t
(西德:12)
ˆπ (t+1)(我 ) = ˆπ (t+1)
问
(西德:12)
ˆπ (t+1)(我 ) =P(我 |yt+1)| yt+1
(我 )| yt+1
乙
问
(西德:13)
(西德:13)
= 1 − γt+1
,
= γt+1
.
(4.51)
所以, in expectation, the updated belief will be
乙
问[ ˆπ (t+1)(我 )] = (1 − γt+1) ˆπ (t+1)
乙
(我 ) + γt+1P(我 |yt+1).
(4.52)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
321
If we apply equation 4.52 to ˆμt+1, we find that E
generalization of Nassar et al. (2010) (参见方程 4.49).
问[ ˆμ(t+1)], is identical to the
而且, in Particle Filtering with a single particle, we sample the parti-
cle’s hidden state, which is equivalent to sampling the interval ˆRt+1. 因为
ˆRt+1 takes the value ˆrt + 1 和 (1 − γt+1) and the value of 1 (=reset) 和
probability γt+1, the expected value of ˆRt+1 is
乙
问[ ˆRt+1] = (1 − γt+1)(ˆrt + 1) + γt+1
.
(4.53)
换句话说, in Nassar et al. (2010), the belief is updated based on the ex-
pected ˆrt, whereas in Particle Filtering with one particle, the belief is updated
using the sampled ˆrt.
总之, the two methods will give different estimates on a trial-per-
trial basis, but the same result in expectation. The pseudocode for Particle
Filtering with one particle for the particular case of the gaussian estimation
task is in the appendix.
4.6 Application to the Exponential Family. For our three algorithms—
Variational SMiLe, Message Passing with fixed number N of particles,
ˆπ (t+1)(我 )
and Particle Filtering—we derive compact update rules for
when the likelihood function PY (y|我 ) is in the exponential family and
圆周率 (0)(我 ) is its conjugate prior. In that case, the likelihood function has the
形式
(西德:7)
PY (y|我 ) = h(y)经验值
(西德:8)
θ T φ(y) − A(我 )
,
(4.54)
where θ is the vector of natural parameters, H(y) is a positive function, φ(y) 是
the vector of sufficient statistics, 和一个(我 ) is the normalization factor. 然后
the conjugate prior π (0) has the form
(西德:7)
圆周率 (0)(我 ) = Pπ
(西德:3) = θ ; χ (0), ν (0)
(西德:8)
= ˜h(我 ) F
(西德:7)
χ (0), ν (0)
(西德:8)
(西德:7)
经验值
(西德:8)
θ T χ (0) − ν (0)A(我 )
,
(4.55)
where χ (0) and ν (0) are the distribution parameters, ˜h(我 ) is a positive
(西德:7)
function, and f
is the normalization factor. For this setting and
(西德:8)
(西德:7)
while π (t) = Pπ
, the Bayes Factor Surprise has the compact
形式
χ (0), ν (0)
(西德:3) = θ ; χ (t), ν (t)
(西德:8)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
(西德:12)
SBF
yt+1
; Pπ
(西德:7)
(西德:3) = θ ; χ (t), ν (t)
(西德:8)(西德:13)
(西德:7)
(西德:7)
= f
F
χ (t) + φ(yt+1), ν (t) + 1
χ (0) + φ(yt+1), ν (0) + 1
(西德:7)
(西德:8)
(西德:8) F
F
χ (0), ν (0)
(西德:7)
χ (t), ν (t)
(西德:8)
(西德:8) .
(4.56)
322
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
The pseudocode for Variational SMiLe, MPN, and Particle Filtering can
be seen in algorithms 1, 2, 和 3, respectively.
4.7 Simulation Task. 在这个部分, we first argue why the mean
squared error is a proper measure for comparing different algorithms with
彼此, and then we explain the version of Leaky integrator that we used
for simulations.
4.7.1 Mean Squared Error as an Optimality Measure. Consider the case
that at each time point t, the goal of an agent is to have an estimation
of the parameter (西德:3)t as a function of the observations Y1:t, 那是, ˆ(西德:3)t =
F (Y1:t ). The estimator that minimizes the mean squared error MSE[ ˆ(西德:3)t] =
乙
磷(Y1:t ,(西德:3)t )
( ˆ(西德:3)t − (西德:3)t )2
是
(西德:10)
(西德:9)
ˆ(西德:3)Opt
t
= E
磷((西德:3)t |Y1:t )
(西德:10)
(西德:9)
(西德:3)t
= Eπ (t)
(西德:9)
(西德:3)t
(西德:10)
,
(4.57)
which is the expected value of (西德:3)t conditioned on the observations Y1:t, 或者, 在
other words under the Bayes-optimal current belief (see Papoulis & Saun-
德斯, 1989, for a proof). The MSE for any other estimator ˆ(西德:3)t can be written
作为 (see below for the proof)
均方误差[ ˆ(西德:3)t] = MSE[ ˆ(西德:3)Opt
t
] + (西德:8)均方误差[ ˆ(西德:3)t],
(西德:17)
(西德:18)
在哪里 (西德:8)均方误差[ ˆ(西德:3)t] = E
磷(Y1:t )
( ˆ(西德:3)t − ˆ(西德:3)Opt
t
)2
≥ 0.
(4.58)
This means that the MSE for any arbitrary estimator ˆ(西德:3)t includes two terms:
the optimal MSE and the mismatch of the actual estimator from the opti-
mal estimator ˆ(西德:3)Opt
. 因此, if the estimator we are interested in is the
expected value of (西德:3)t under the approximate belief ˆπ (t) computed by each of
(西德:10)
our algorithms (IE。, ˆ(西德:3)(西德:5)
), the second term in equation 4.58, 那
t
是, the deviation from optimality, is a measure of how good the approxima-
tion is.
(西德:9)
(西德:3)t
= E
ˆπ (t)
t
Proof for the Algorithms without Sampling. Consider ˆ(西德:3)Opt
= fOpt(Y1:t ).
然后, for any other arbitrary estimator ˆ(西德:3)t = f (Y1:t ) (except for the ones with
sampling), 我们有
t
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
均方误差[ ˆ(西德:3)t] = E
= E
磷(Y1:t ,(西德:3)t )
磷(Y1:t ,(西德:3)t )
= E
磷(Y1:t ,(西德:3)t )
(西德:9)
(西德:10)
(西德:9)
( ˆ(西德:3)t − (西德:3)t )2
( F (Y1:t ) - (西德:3)t )2
(西德:17)(西德:7)
(西德:10)
(西德:8)
( F (Y1:t ) − fOpt(Y1:t )) + ( fOpt(Y1:t ) - (西德:3)t )
(西德:18)
.
2
(4.59)
Learning in Volatile Environments With the Bayes Factor Surprise
323
The quadratic term in the last line can be expanded and written as
均方误差[ ˆ(西德:3)t] = E
磷(Y1:t ,(西德:3)t )
(西德:17)
(西德:18)
( F (Y1:t ) − fOpt(Y1:t ))2
(西德:18)
(西德:17)
+ 乙
磷(Y1:t ,(西德:3)t )
+ 2乙
磷(Y1:t ,(西德:3)t )
( fOpt(Y1:t ) - (西德:3)t )2
(西德:17)
(西德:18)
.
( F (Y1:t ) − fOpt(Y1:t ))( fOpt(Y1:t ) - (西德:3)t )
(4.60)
The random variables in the expected value of the first line are not depen-
dent on (西德:3)t, so it can be computed over Y1:t. The expected value of the second
line is equal to MSE[ ˆ(西德:3)Opt
]. It can also be shown that the expected value of
the third line is equal to 0,
t
3rd line = 2E
磷(Y1:t )
(西德:17)
(西德:17)
(西德:17)
(西德:18)(西德:18)
乙
磷((西德:3)t |Y1:t )
( F (Y1:t ) − fOpt(Y1:t ))( fOpt(Y1:t ) - (西德:3)t )
= 2E
磷(Y1:t )
( F (Y1:t ) − fOpt(Y1:t ))( fOpt(Y1:t ) − E
(西德:18)
磷((西德:3)t |Y1:t )[(西德:3)t])
= 0,
(4.61)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
where in the last line, we used the definition of the optimal estimator. To-
在一起, 我们有
均方误差[ ˆ(西德:3)t] = MSE[ ˆ(西德:3)Opt
t
] + 乙
磷(Y1:t )
( ˆ(西德:3)t − ˆ(西德:3)Opt
t
)2
(西德:17)
(西德:18)
.
(4.62)
(西德:2)
Proof for the Algorithms with Sampling. For particle filtering (and any
kind of estimator with sampling), the estimator is not a deterministic func-
tion of observations Y1:t. 相当, the estimator is a function of observations
as well as a set of random variables (样品) drawn from a distribution
that is also a function of observations Y1:t. In our case, the samples are the
sequence of hidden states C1:t. The estimator can be written as
ˆ(西德:3)PF
t
= f (Y1:t, C(1:氮)
1:t
),
(4.63)
1:t
where C(1:氮)
are N independent and identically distributed samples drawn
from the proposal distribution Q(C1:t|Y1:t ). MSE for this estimator should
also be averaged over the samples, which leads to
均方误差[ ˆ(西德:3)PF
t ] = E
磷(Y1:t ,(西德:3)t )问(C(1:氮)
1:t
|Y1:t )
= E
磷(Y1:t ,(西德:3)t )问(C(1:氮)
1:t
|Y1:t )
(西德:9)
(西德:9)
(西德:10)
( ˆ(西德:3)PF
t
- (西德:3)t )2
( F (Y1:t, C(1:氮)
1:t
) - (西德:3)t )2
(西德:10)
.
(4.64)
324
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
Similar to what we did before, the MSE for particle filtering can be written
作为
均方误差[ ˆ(西德:3)PF
t ] = MSE[ ˆ(西德:3)Opt
t
] + 乙
= MSE[ ˆ(西德:3)Opt
t
] + 乙
1:t
磷(Y1:t )问(C(1:氮)
(西德:17)
乙
磷(Y1:t )
(西德:17)
|Y1:t )
( ˆ(西德:3)PF
t
(西德:9)
(西德:18)
− ˆ(西德:3)Opt
t
)2
(西德:10)(西德:18)
,
− ˆ(西德:3)Opt
t
)2
(4.65)
( ˆ(西德:3)PF
t
问(C(1:氮)
1:t
|Y1:t )
which can be written in terms of bias and variance over samples as
均方误差[ ˆ(西德:3)PF
t ] = MSE[ ˆ(西德:3)Opt
t
] + 乙
磷(Y1:t )
+ BiasQ(C(1:氮)
1:t
|Y1:t )( ˆ(西德:3)PF
t
(西德:17)
VarQ(C(1:氮)
1:t
(西德:18)
.
, ˆ(西德:3)Opt
t
)2
|Y1:t )( ˆ(西德:3)PF
t
)
(4.66)
(西德:2)
4.7.2 Leaky Integration. For the gaussian task, the goal is to have an es-
timation of the mean of the gaussian distribution at each time t, denoted
by ˆθt. Given a leak parameter ω ∈ (0, 1], the leaky integrator estimation
是
ˆθt =
(西德:19)
t
k=1
(西德:19)
t
k=1
ωt−kyk
ωt−k
.
(4.67)
For the categorical task, the goal is to have an estimation of the param-
eters of the categorical distribution at each time t, denoted by ˆθt = [ ˆθ
我,t]氮
我=1
for the case that there are N categories. Given a leak parameter ω ∈ (0, 1],
the leaky integrator estimation is
ˆθ
我,t
=
(西德:19)
t
k=1
ωt−kδ(yk
(西德:19)
ωt−k
t
k=1
− i)
,
where δ is the Kronecker delta function.
(4.68)
4.8 Derivation of the Formula Relating Shannon Surprise to the Mod-
ulated Learning Rate. Given the defined generative model, the Shannon
surprise upon observing yt+1 can be written as
SSh(yt+1
; 圆周率 (t)) = log
= log
(西德:6)
1
磷(yt+1
|y1:t )
(西德:5)
(西德:5)
1
(西德:6)
(1 − pc)磷(yt+1
; 圆周率 (t)) + pcP(yt+1
; 圆周率 (0))
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
325
(西德:5)
= log
(西德:6)
1
; 圆周率 (0))
磷(yt+1
+ 日志
= SSh(yt+1
; 圆周率 (0)) + 日志
(西德:5)
γt+1
个人电脑
(西德:12)
(西德:13)
(西德:6)
1
1 + 1
米
1
SBF (yt+1
;圆周率 (t) )
(西德:5)
1
个人电脑
(西德:6)
,
(4.69)
= γ
; 圆周率 (t)), m = pc
where γt+1
of equation 2.9. 因此, 这
1−pc
modulated adaptation rate can be written as in equation 2.25 and the Bayes
Factor Surprise as in equation 3.2.
SBF(yt+1
4.9 Experimental Predictions.
4.9.1 Setting. Consider a gaussian task where Yt can take values in R.
The likelihood function PY (y|我 ) is defined as
PY (y|我 ) = N (y; 我 , σ 2),
(4.70)
where σ ∈ R+
is the standard deviation and θ ∈ R is the mean of the distri-
bution, 那是, the parameter of the likelihood. Whenever there is a change
in the environment (with probability pc ∈ (0, 1)), the value θ is drawn from
the prior distribution π (0)(我 ) = N (我 ; 0, 1).
4.9.2 Theoretical Proofs for Prediction 1. For our theoretical derivations for
our first prediction, we consider the specific but relatively mild assumption
that the subjects’ belief π (t) at each time is a gaussian distribution,
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
圆周率 (t)(我 ) = N (我 ; ˆθt, ˆσ 2
t ),
(4.71)
where ˆθt and ˆσt are determined by the learning algorithm and the sequence
of observations y1:t. This is the case when the subjects use either VarSMiLe,
Nas10∗, Nas12∗, pf1, MP1, or Leaky Integration as their learning rule. 和
such assumptions, the inferred probability distribution P(y; 圆周率 (t)) 可
written as
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
磷(y; 圆周率 (t)) = N (y; ˆθt, σ 2 + ˆσ 2
t ).
(4.72)
− ˆθt and the “sign bias” as st+1
As mentioned in section 2, 我们定义, at time t, the prediction error as
δt+1
= sign(δt+1 ˆθt ). 然后, given an
= yt+1
absolute prediction ˆθ > 0, an absolute prediction error δ > 0, a standard de-
viation σ
C, and a sign bias s ∈ {−1, 1}, the average Bayes Factor Surprise is
computed as
326
V. Liakoni, A. Modirshanechi, 瓦. Gerstner, 和 J. Brea
(西德:11)
SBF(yt; ˆπ (t−1)),
¯SBF( ˆθ , δ, s, σ
C) = 1
|时间 |
t∈T
where T = {t : | ˆθt−1
| = ˆθ , |δt| = δ, ˆσt = σ
C
, st = s}.
(4.73)
It can easily be shown that the value SBF(yt; ˆπ (t−1)) is the same for all t ∈ T ,
and hence the average surprise is the same as the surprise for each time
观点. 例如, the average surprise for s = +1 is equal to
¯SBF( ˆθ , δ, s = +1, σ
C) =
氮 ( ˆθ + δ; 0, σ 2 + 1)
氮 (δ; 0, σ 2 + σ 2
C )
.
(4.74)
Similar formulas can be computed for ¯SBF( ˆθ , δ, s = −1, σ
参考 (西德:8) ¯SBF( ˆθ , δ, σ
computed as
C) = ¯SBF( ˆθ , δ, s = +1, σ
C) − ¯SBF( ˆθ , δ, s = −1, σ
C). Then the dif-
C) 可
(西德:8) ¯SBF( ˆθ, δ, σ
C) =
氮 ( ˆθ + δ; 0, σ 2 + 1) − N ( ˆθ − δ; 0, σ 2 + 1)
氮 (δ; 0, σ 2 + σ 2
C )
.
(4.75)
It can be shown that
(西德:8) ¯SBF( ˆθ, δ, σ
C) < 0 and
∂
∂δ
(cid:8) ¯SBF( ˆθ , δ, σ
C) < 0
(4.76)
for all ˆθ > 0, δ > 0, and σ
C
for the second inequality is given below.
> 0. The first inequality is trivial, and the proof
The average Shannon Surprise can be computed in a similar way. 为了
例子, for s = +1, 我们有
¯SSh( ˆθ , δ, s = +1, σ
C)
= − log
(西德:12)
(西德:13)
,
pcN ( ˆθ + δ; 0, σ 2 + 1) + (1 − pc)氮 (δ; 0, σ 2 + σ 2
C )
(4.77)
and then the difference (西德:8) ¯SSh( ˆθ , δ, σ
−1, σ
C) can be computed as
C) = ¯SSh( ˆθ , δ, s = +1, σ
C) − ¯SSh( ˆθ , δ, s =
(西德:8) ¯SSh( ˆθ , δ, σ
C) = log
(西德:5)
1 + m ¯SBF( ˆθ , δ, s = −1, σ
1 + m ¯SBF( ˆθ , δ, s = +1, σ
C)
C)
(西德:6)
,
(4.78)
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
327
where m = pc
1−pc
有
. 然后, using the results for the Bayes Factor Surprise, 我们
(西德:8) ¯SSh( ˆθ, δ, σ
C) > 0 和
∂
∂δ
(西德:8) ¯SSh( ˆθ , δ, σ
C) > 0,
for all ˆθ > 0, δ > 0, and σ
C
> 0.
Proof of the Second Inequality for SBF. Let us define the variables
σ 2
d
σ 2
n
,
= σ 2 + σ 2
C
= σ 2 + 1,
as well as the functions
f1(δ) = ¯SBF( ˆθ , δ, s = +1, σ
C) =
f2(δ) = ¯SBF( ˆθ , δ, s = −1, σ
C) =
(西德:5)
经验值
(西德:5)
经验值
σ
d
σn
σ
d
σn
δ2
2σ 2
d
δ2
2σ 2
d
- (δ + ˆθ )2
2σ 2
n
- (δ − ˆθ )2
2σ 2
n
(西德:6)
(西德:6)
,
,
F (δ) = (西德:8) ¯SBF( ˆθ , δ, σ
C) = f1(δ) − f2(δ).
The following inequalities hold true:
F (δ) < 0 ⇒ f1(δ) < f2(δ), < σ 2 0 = 1 ⇒ σ 2 d < σ 2 n . σ 2 C Then, the derivative of f (δ) can be computed as (cid:5) d dδ f (δ) = f1(δ) (cid:7) ˆθ σ 2 n = − δ σ 2 d − δ + ˆθ σ 2 n (cid:8) f1(δ) + f2(δ) (cid:6) δ − ˆθ σ 2 n − (cid:5) − f2(δ) (cid:5) δ σ 2 d 1 − f1(δ) − f2(δ) f1(δ) + f2(δ) δ ˆθ (cid:6) (cid:5) (cid:6)(cid:6) σ 2 n σ 2 d − 1 < 0. Therefore, we have ∂ ∂δ (cid:8) ¯SBF( ˆθ , δ, σ C) = d dδ f (δ) < 0. (4.79) (4.80) (4.81) (4.82) (4.83) (cid:2) l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . / e d u n e c o a r t i c e - p d / l f / / / / 3 3 2 2 6 9 1 8 9 6 8 0 8 n e c o _ a _ 0 1 3 5 2 p d . / f b y g u e s t t o n 0 8 S e p e m b e r 2 0 2 3 Proof of the Second Inequality for SSh. Using the functions we defined for the previous proof and after computing the partial derivative of equation 4.78, we have 328 V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea ∂ ∂δ (cid:8) ¯SSh( ˆθ , δ, σ C) = ∂ ∂δ log = d dδ log (cid:6) (cid:5) (cid:5) 1 + m ¯SBF( ˆθ , δ, s = −1, σ 1 + m ¯SBF( ˆθ , δ, s = +1, σ (cid:6) 1 + m f2(δ) . 1 + m f1(δ) C) C) (4.84) The derivative of the last term can be written in terms of the derivates of f1 and f2, indicated by f (cid:5) 2, respectively, 1 and f (cid:5) ∂ ∂δ (cid:8) ¯SSh( ˆθ , δ, σ C) = m f (cid:5) 2(δ) 1 + m f2(δ) − m f (cid:5) 1(δ) 1 + m f1(δ) (δ) 1 + m f2(δ) −m f (cid:5) (cid:8)(cid:7) = (cid:7) 1 + m f1(δ) (cid:8) + (cid:7) m2( f1 f (cid:5) 2 1 + m f1(δ) − f (cid:5) (cid:8)(cid:7) 1 f2)(δ) 1 + m f2(δ) (cid:8) . (4.85) The first term is always positive based on the proof for SBF. The second term is also always positive because ( f1 f (cid:5) 2 − f (cid:5) 1 f2)(δ) = f1(δ) f2(δ) (cid:5)(cid:5) δ σ 2 d − δ − ˆθ σ 2 n (cid:6) (cid:5) − δ σ 2 d − δ + ˆθ σ 2 n (cid:6)(cid:6) = f1(δ) f2(δ) 2 ˆθ σ 2 n > 0.
因此, 我们有
∂
∂δ
(西德:8) ¯SSh( ˆθ , δ, σ
C) > 0.
(4.86)
(西德:2)
4.9.3 Simulation Procedure for Prediction 1. In order to relax the main as-
sumption of our theoretical proofs (the belief is always a gaussian distribu-
的), to include the practical difficulties of a real experiment (例如, to use
|δt| ≈ δ instead of |δt| = δ), and to have an estimation of the effect size, 我们
also performed simulations for our first experimental prediction.
For each simulated subject, the procedure of our simulation was as
如下:
1. We fixed the hyperparameters σ 2 and pc for producing samples.
2. We selected a learning algorithm (例如, pf20) and fixed its correspond-
ing tuned parameters (based on our simulations in section 2).
3. We applied the learning algorithm over a sequence of observations
y1:时间 . Note that in a real experiment, this step can be done through
a few episodes, which makes it possible to have a long sequence of
observations (IE。, large T).
4. At each time t, we saved the values yt, ˆθt, ˆσt, δt, st, SSh(yt; ˆπ (t−1)), 和
SBF(yt; ˆπ (t−1)).
我
D
哦
w
n
哦
A
d
e
d
F
r
哦
米
H
t
t
p
:
/
/
d
我
r
e
C
t
.
米
我
t
.
/
e
d
你
n
e
C
哦
A
r
t
我
C
e
–
p
d
/
我
F
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
C
哦
_
A
_
0
1
3
5
2
p
d
.
/
F
乙
y
G
你
e
s
t
t
哦
n
0
8
S
e
p
e
米
乙
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
329
然后, given an absolute prediction ˆθ > 0, an absolute prediction error
> 0, and a sign bias s ∈ {−1, 1}, we defined
δ > 0, a standard deviation σ
C
the set of time points,
T = {1 < t ≤ T : || ˆθt−1 | − ˆθ | < (cid:8)θ , ||δt| − δ| < (cid:8)δ, | ˆσt − σ C | < (cid:8)σ C , st = s}, (4.87) | ≈ ˆθ , |δt| ≈ δ, and ˆσt ≈ σ | − ˆθ| < (cid:8)θ , ||δt| − δ| < (cid:8)δ, and | ˆσt − σ C where || ˆθt−1 C are equivalent to | ˆθt−1 C are positive real values that should be determined based on practical limitations (mainly the length of the observation sequence T). We then computed the average surprise values as C, respectively. (cid:8)θ , (cid:8)δ, and (cid:8)σ | < (cid:8)σ ¯SBF( ˆθ , δ, s, σ C) = 1 |T | ¯SSh( ˆθ , δ, s, σ C) = 1 |T | (cid:11) t∈T (cid:11) t∈T SBF(yt; ˆπ (t−1)), SSh(yt; ˆπ (t−1)). (4.88) We repeated this procedure for N different simulated subjects (with differ- ent random seeds). The average of ¯SBF and ¯SSh over N = 20 subjects, for ∗ = 0.5, two learning algorithms (Nas12 (cid:8)θ = 0.25, (cid:8)δ = 0.1, and (cid:8)σ = 1 is shown in Figure 10B. The results are C the same as what our theoretical analysis predicted. and pf20), and for T = 500, ˆθ = 1, σ C 4.9.4 Simulation Procedure for Prediction 2. For our second prediction, the theoretical proof is trivial. However, in order to have a setting similar to ; ˆπ (t)) = p), a real experiment (e.g., to use P(yt+1 and to have an estimation of the effect size, we used simulations also for our second experimental predictions. ; ˆπ (t)) ≈ p instead of P(yt+1 We followed the same procedure as the one for the simulation of the first prediction. For each simulated subject and at each time t, we saved the ; ˆπ (0)), SSh(yt; ˆπ (t−1)), and SBF(yt; ˆπ (t−1)). Then, quantities P(yt+1 for a given a probability value p > 0, we defined the set of time points,
; ˆπ (t)), 磷(yt+1
T = {0 ≤ t ≤ T : |磷(yt+1
; ˆπ (t)) − p| < (cid:8)p, |P(yt+1
; ˆπ (0)) − p| < (cid:8)p},
(4.89)
; ˆπ (t)) ≈ p and P(yt+1
; ˆπ (t)) − p| < (cid:8)p and |P(yt+1
where |P(yt+1
; ˆπ (0)) − p| < (cid:8)p are equivalent
; ˆπ (0)) ≈ p, respectively. (cid:8)p is a positive real
to P(yt+1
value that should be determined based on practical limitations (mainly
the length of the observation sequence T). We then computed the aver-
age surprise ¯SBF(p) and ¯SSh(p) over T for each value of p. We repeated
this procedure for N different simulated subjects (with different random
seeds). The average of ¯SBF and ¯SSh over N = 20 subjects, for two learning
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
330
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
and pf20), and for T = 500 and (cid:8)p = 0.0125 is shown in
∗
algorithms (Nas12
Figure 11B.
Appendix
A.1 Modified Algorithm of Nassar et al. (2012, 2010): Adaptation for
Gaussian Prior.
A.1.1 Recursive Update of the Estimated Mean for Gaussian Prior. Let us
first consider the case of a stationary regime (no change points) where ob-
served samples are drawn from a gaussian distribution with known vari-
|θ ∼ N (θ , σ 2), and the parameter θ is also drawn from a gaussian
ance, yt+1
distribution θ ∼ N (μ
, ..., yt+1, it
0
can be shown that, using Bayes’ rule, the posterior distribution P(θ |y1:t+1) =
π (t+1)
B
0 ). After having observed samples y1
(θ ) is
, σ 2
(cid:21)
(cid:21)
P(θ |y1:t+1) = N
θ ; μB,t+1
=
1
+ t+1
σ 2
1
σ 2
0
+
μ
0
σ 2
0
(cid:19)
(cid:22)
t+1
i=1 yi
σ 2
(cid:22)
, σ 2
B,t+1
=
1
+ t+1
σ 2
.
1
σ 2
0
(A.1)
An estimate of θ is its expected value E(θ |y1:t+1) = μB,t+1.
In a nonstationary regime where, after having observed y1
, . . . , yt from
the same hidden state, there is the possibility for a change point upon ob-
serving yt+1, the posterior distribution is
P(θ |y1:t+1) = (1 − γt+1)P(θ |y1:t+1
, ct+1
= 0) + γt+1P(θ |yt+1
, ct+1
= 1).
(A.2)
To facilitate notation, we denote ct+1
so that
= 0 as “stay” and ct+1
= 1 as “change”
P(θ |y1:t+1) = (1 − γt+1)P(θ |y1:t+1
, stay) + γt+1P(θ |yt+1
, change).
(A.3)
Note that the above is equivalent to the Bayesian recursive formula, equa-
tion 2.10 of the main text, where γt+1 is the adaptation rate we saw in equa-
tion 2.9 of the main text, and is essentially the probability to change given
= 1|y1:t+1). In Nassar et al. (2010) this quantity
the new observation, P(ct+1
is denoted as (cid:12)t+1. Taking equation A.1 into account, we have
E(θ |y1:t+1
, stay) = μB,t+1
=
1
σ 2
0
E(θ |y1:t+1
, change) =
1
+ 1
σ 2
1
σ 2
0
(cid:21)
μ
0
σ 2
0
(cid:19)
t+1
i=t+1−rt yi
σ 2
(cid:22)
,
+
(cid:6)
,
(A.4)
1
+ rt +1
σ 2
(cid:5)
μ
0
σ 2
0
+ yt+1
σ 2
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
331
where rt is the time interval of observations coming from the same hidden
state, calculated at time t. Taking the expectation of equation A.3 the esti-
mated mean upon observing the new sample yt+1 is
ˆμt+1
= (1 − γ )
(cid:21)
+
μ
0
σ 2
0
(cid:22)
(cid:19)
t+1
i=t+1−rt yi
σ 2
+ γ
1
+ 1
σ 2
1
σ 2
0
(cid:5)
μ
0
σ 2
0
(cid:6)
,
+ yt+1
σ 2
1
+ rt +1
σ 2
1
σ 2
0
(A.5)
where we dropped the subscript t + 1 in γ to simplify notations. We have
ˆμt+1
= (1 − γ )
1
σ 2
0
+ γ
1
σ 2
0
1
+ 1
σ 2
(cid:21)
+
μ
0
σ 2
0
(cid:19)
t
i=t+1−rt yi
σ 2
+ yt+1
σ 2
(cid:22)
(cid:6)
.
+ yt+1
σ 2
(A.6)
1
+ rt +1
σ 2
(cid:5)
μ
0
σ 2
0
Because ˆμt = 1
+ rt
σ 2
1
σ 2
0
(cid:12)
+
μ
0
σ 2
0
(cid:19)
t
i=t+1−rt
σ 2
(cid:13)
yi
, after a few lines of algebra, we have
ˆμt+1
= (1 − γ ) ˆμt + γ μ
0
+ (1 − γ )
1
+ rt + 1
σ 2
σ 2
0
(yt+1
− ˆμt )
+ γ
1
+ 1
σ 2
σ 2
0
(yt+1
− μ
0).
We now define ρ = σ 2
σ 2
0
and find
ˆμt+1
= (1 − γ ) ˆμt + γ μ
0
+ (1 − γ )
1
ρ + rt + 1
(yt+1
− ˆμt )
+ γ
1
ρ + 1
(yt+1
− μ
0).
(A.7)
(A.8)
A rearrangement of the terms and inclusion of the dependency of γ on
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
time yields
(cid:5)
= (1 − γt+1)
ˆμt +
ˆμt+1
(cid:5)
+ γt+1
μ
0
+ 1
ρ + 1
1
ρ + rt + 1
(cid:6)
− ˆμt )
(yt+1
(cid:6)
(yt+1
− μ
0)
.
(A.9)
332
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
In order to obtain a form similar to the one of Nassar et al. (2012, 2010)
we continue and spell out the terms that include the quantities ˆμt, μ
0, and
yt+1:
ˆμt+1
= (1 − γ ) ˆμt − (1 − γ )
1
ρ + rt + 1
ˆμt
+ γ μ
0
− γ
+ (1 − γ )
μ
0
1
ρ + 1
1
ρ + rt + 1
yt+1
+ γ
1
ρ + 1
.
yt+1
(A.10)
Using that
1
ρ+rt +1
= 1
ρ+1
−
rt
(ρ+1)(ρ+rt +1) , we have
ˆμt+1
= (1 − γ ) ˆμt − (1 − γ )
1
ρ + 1
ˆμt + (1 − γ )
rt
(ρ + 1)(ρ + rt + 1)
ˆμt
+ γ μ
0
− γ
μ
0
1
ρ + 1
1
ρ + 1
+ (1 − γ )
yt+1
− (1 − γ )
rt
(ρ + 1)(ρ + rt + 1)
yt+1
+ γ
1
ρ + 1
.
yt+1
(A.11)
After a further step of algebra, we arrive at
ˆμt+1
=
(cid:12)
ρ
ρ + 1
(cid:13)
(1 − γ ) ˆμt + γ μ
0
(cid:5)
+ 1
ρ + 1
(1 − γ )
rt
ρ + rt + 1
(cid:6)
( ˆμt − yt+1) + yt+1
.
(A.12)
If we define 1 − α = (1 − γ )
rt
ρ+rt +1
and rearrange the terms, we have
⇒ α = 1 − (1 − γ )
rt
ρ+rt +1
⇒ α = ρ+γ rt +1
ρ+rt +1
ˆμt+1
=
ˆμt+1
=
(cid:12)
(cid:12)
ρ
ρ + 1
ρ
ρ + 1
(cid:13)
(1 − γ ) ˆμt + γ μ
0
ˆμt + γ (μ
0
(cid:13)
− ˆμt )
+ 1
(cid:12)
(cid:13)
ρ + 1
(cid:12)
ˆμt + α(yt+1
(1 − α) ˆμt + αyt+1
(cid:13)
.
− ˆμt )
+ 1
ρ + 1
(A.13)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
333
Adding back the dependency of γ and α on time, we finally have
ˆμt+1
=
ρ
ρ + 1
(cid:12)
ˆμt + γt+1(μ
0
(cid:13)
− ˆμt )
+ 1
ρ + 1
(cid:12)
ˆμt + αt+1(yt+1
(cid:13)
.
− ˆμt )
(A.14)
A.1.2 Recursive Update of the Estimated Variance for Gaussian Prior. Nassar
= Var(θ |y1:t+1) and then, based
et al. (2012) first calculate the variance ˆσ 2
on this, compute ˆrt+1. We derive here these calculations for the case of gaus-
sian prior. We note again that
t+1
P(θ |y1:t+1) = (1 − γt+1)P(θ |y1:t+1
, stay) + γt+1P(θ |yt+1
, change).
(A.15)
Then for the variance ˆσ 2
t+1
= Var(θ |y1:t+1), we have
ˆσ 2
t+1
= (1 − γ )σ 2
stay
+ γ σ 2
change
+ (1 − γ )γ (μstay − μ
change)2
= (1 − γ )σ 2
B,t+1
+ γ σ 2
change
+ (1 − γ )γ (μB,t+1
− μ
change)2,
(A.16)
where σ 2
=
B,t+1
and σ 2
1
+ rt
We have already defined ρ = σ 2
σ 2
0
change
+1
σ 2
1
σ 2
0
= 1
1
σ 2
0
+ 1
σ 2
.
so that
A = (1 − γ )σ 2
B,t+1
+ γ σ 2
change
= (1 − γ )
σ 2
ρ + rt + 1
+ γ
σ 2
ρ + 1
,
(A.17)
Using, as before, that
1
ρ+rt +1
= 1
ρ+1
−
rt
(ρ+1)(ρ+rt +1) , we have
(cid:5)
A =
σ 2
ρ + 1
1 − (1 − γ )
(cid:6)
.
1
ρ + rt + 1
(A.18)
We already defined the learning rate α = 1 − (1 − γ )
rt
ρ+rt +1 , so we can write
A =
σ 2
ρ + 1
α.
Note that μt = 1
+ rt
σ 2
1
σ 2
0
we have
(cid:12)
+
μ
0
σ 2
0
(cid:19)
t
i=t+1−rt
σ 2
(cid:13)
yi
so for the calculation of the last term,
(A.19)
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
B = μB,t+1
− μ
change
(cid:5)
1
+ rt +1
σ 2
=
1
σ 2
0
(cid:19)
t+1
i=t+1−rt yi
σ 2
(cid:6)
−
+
(cid:5)
μ
0
σ 2
0
(cid:6)
.
+ yt+1
σ 2
1
+ 1
σ 2
1
σ 2
0
(A.20)
μ
0
σ 2
0
334
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
We now rearrange the terms
(cid:5)
B = μt +
1
ρ + 1
−
rt
(ρ + 1)(ρ + rt + 1)
(cid:6)
(yt+1
− μt ) − μ
0
− 1
ρ + 1
(yt+1
− μ
0),
and finally we have
ˆσ 2
t+1
=
σ 2
ρ + 1
α + (1 − γ )γ B2.
(A.21)
(A.22)
A.2 Implementation of Nas10∗, Nas12∗, and Particle Filtering with
One Particle for the Gaussian Estimation Task. We provide here the pseu-
∗
docode for the algorithms Nas10
, and Particle Filtering with one
, Nas12
particle for the gaussian estimation task (see algorithms A1, A2, and A3).
Observations are drawn from a gaussian distribution with known vari-
, σ 2) and θt = μt. When there
ance and unknown mean: yt+1
is a change, the parameter μ is also drawn from a gaussian distribution,
μ ∼ N (μ
0 ). All three algorithms estimate the expected μt+1 (i.e., ˆμt+1)
0
upon observing a new sample yt+1.
∼ N (μt+1
|μt+1
, σ 2
∗
After rewriting equation 4.56, we have
(cid:12)
(cid:13)
SBF
yt+1
; ˆμt, ˆσt
=
(cid:14)
exp
−
σ 2 + ˆσ 2
t
σ 2 + σ 2
0
μ2
0
2σ 2
0
− ˆμ2
t
2 ˆσ 2
t
+
ˆμt
ˆσ 2
t
2(
+ yt+1
σ 2
+ 1)
σ 2
ˆσ 2
t
+
(cid:15)
.
μ
0
σ 2
0
2(
+ yt+1
σ 2
+ 1)
σ 2
σ 2
0
(A.23)
Learning in Volatile Environments With the Bayes Factor Surprise
335
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Note that the pseudocode for pf1 provided here is a translation of al-
gorithm 3 to the case of a single particle (where there are no weights to
calculate) and for the gaussian distribution as a particular instance of the
exponential family.
336
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
Acknowledgments
We thank both reviewers for their constructive and helpful comments. This
research was supported by the Swiss National Science Foundation (200020
184615) and the European Union Horizon 2020 Framework Program under
grant agreement 785907 (Human Brain Project, SGA2).
References
Adams, R. P., & MacKay, D. J. (2007). Bayesian online changepoint detection.
arXiv:0710.3742.
Aminikhanghahi, S., & Cook, D. J. (2017). A survey of methods for time series change
point detection. Knowledge and Information Systems, 51(2), 339–367.
Barber, D. (2006). Expectation correction for smoothed inference in switching linear
dynamical systems. Journal of Machine Learning Research, 7, 2515–2540.
Barber, D. (2012). Bayesian reasoning and machine learning. Cambridge: Cambridge
University Press.
Beal, M. J. (2003). Variational algorithms for approximate Bayesian inference. Ph.D. diss.,
University College London.
Behrens, T. E., Woolrich, M. W., Walton, M. E., & Rushworth, M. F. (2007). Learning
the value of information in an uncertain world. Nature Neuroscience, 10(9), 1214.
Bogacz, R. (2017). A tutorial on the free-energy framework for modelling perception
and learning. Journal of Mathematical Psychology, 76, 198–211.
Bogacz, R. (2019). Dopamine role in learning and action inference. bioRxiv:837641.
Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge
University Press.
Brown, S. D., & Steyvers, M. (2009). Detecting and predicting changes. Cognitive Psy-
chology, 5(1), 49–67.
Cummings, R., Krehbiel, S., Mei, Y., Tuo, R., & Zhang, W. (2018). Differentially pri-
vate change-point detection. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman,
N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing sys-
tems, 31 (pp. 10825–10834). Red Hook, NY: Curran.
d’Acremont, M., & Bossaerts, P. (2016). Neural mechanisms behind identification of
leptokurtic noise and adaptive behavioral response. Cerebral Cortex, 26(4), 1818–
1830.
Daw, N., & Courville, A. (2008). The pigeon as particle filter. In J. C. Platt, D. Koller,
Y. Singer, & S. T. Roweis (Eds.), Advances in neural information processing systems,
20 (pp. 369–376). Red Hook, NY: Curran.
Doucet, A., Godsill, S., & Andrieu, C. (2000). On sequential Monte Carlo sampling
methods for Bayesian filtering. Statistics and Computing, 10(3), 197–208.
Doucet, A., & Johansen, A. M. (2009). A tutorial on particle filtering and smooth-
ing: Fifteen years later. In D. Crisan & B. Rozovskii (Eds.), The Oxford handbook of
nonlinear filtering (pp. 656–704), 3. Oxford: Oxford University Press.
Doucet, A., & Tadi´c, V. B. (2003). Parameter estimation in general state-space models
using particle methods. Annals of the Institute of Statistical Mathematics, 55(2), 409–
422.
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
337
Efron, B., & Hastie, T. (2016). Computer age statistical inference. Cambridge: Cambridge
University Press.
Faraji, M., Preuschoff, K., & Gerstner, W. (2018). Balancing new against old infor-
mation: The role of puzzlement surprise in learning. Neural Computation, 30(1),
34–83.
Fearnhead, P., & Liu, Z. (2007). On-line inference for multiple changepoint problems.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(4), 589–
605.
Findling, C., Chopin, N., & Koechlin, E. (2019). Imprecise neural computations as source
of human adaptive behavior in volatile environments. bioRxiv:799239.
Fox, E., Sudderth, E. B., Jordan, M. I., & Willsky, A. S. (2011). Bayesian nonparametric
inference of switching dynamic linear models. IEEE Transactions on Signal Process-
ing, 59(4), 1569–1585.
Frémaux, N., & Gerstner, W. (2016). Neuromodulated spike-timing-dependent plas-
ticity, and theory of three-factor learning rules. Frontiers in Neural Circuits, 9, 85.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews
Neuroscience, 1(2), 127.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active
inference: A process theory. Neural Computation, 29(1), 1–49.
George, C. P., & Doss, H. (2017). Principled selection of hyperparameters in the latent
Dirichlet allocation model. Journal of Machine Learning Research, 18, 1–38.
Gershman, S. J. (2019). What does the free energy principle tell us about the brain?
arXiv:1901/07945.
Gershman, S. J., Monfils, M.-H., Norman, K. A., & Niv, Y. (2017). The computational
nature of memory modification. Elife, 6, e23763.
Gershman, S. J., Radulescu, A., Norman, K. A., & Niv, Y. (2014). Statistical computa-
tions underlying the dynamics of memory updating. PLOS Computational Biology,
10(11), e1003939.
Gerstner, W., Lehmann, M., Liakoni, V., Corneil, D., & Brea, J. (2018). Eligibility traces
and plasticity on behavioral time scales: Experimental support of neo-Hebbian
three-factor learning rules. Frontiers in Neural Circuits, 12.
Ghahramani, Z., & Hinton, G. E. (2000). Variational learning for switching state-
space models. Neural Computation, 12(4), 831–864.
Glaze, C. M., Kable, J. W., & Gold, J. I. (2015). Normative evidence accumulation in
unpredictable environments. Elife, 4, e08825.
Gordon, N.
J., Salmond, D.
(1993). Novel approach to
J., & Smith, A. F.
nonlinear/non-gaussian Bayesian state estimation. In IEE Proceedings (Radar and
Signal Processing), 140, 107–113.
Heilbron, M., & Meyniel, F. (2019). Confidence resets reveal hierarchical adaptive
learning in humans. PLOS Computational Biology, 15(4), e1006972.
Huang, Y., & Rao, R. P. (2014). Neurons as Monte Carlo samplers: Bayesian inference
and learning in spiking networks. In Z. Ghahramani, M. Welling, C. Cortes, N.
Lawrence, & K. Q. Weinberger (Eds.), Advances in neural information processing
systems, 27 (pp. 1943–1951). Red Hook, NY: Curran.
Huettel, S. A., Mack, P. B., & McCarthy, G. (2002). Perceiving patterns in random
series: Dynamic processing of sequence in prefrontal cortex. Nature Neuroscience,
5(5), 485–490.
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
338
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
Itti, L., & Baldi, P. F. (2006). Bayesian surprise attracts human attention. In Y. Weiss,
B. Schölkopf, & J. Platt (Eds.), Advances in neural information processing systems, 18
(pp. 547–554). Cambridge, MA: MIT Press.
Joshi, S., & Gold, J. I. (2020). Pupil size as a window on neural substrates of cognition.
Trends in Cognitive Sciences, 24.
Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical
Association, 9(430), 773–795.
Konovalov, A., & Krajbich, I. (2018). Neurocomputational dynamics of sequence
learning. Neuron, 98(6), 1282–1293.
Kopp, B., & Lange, F. (2013). Electrophysiological indicators of surprise and entropy
in dynamic task-switching environments. Frontiers in Human Neuroscience, 7,
300.
Kutschireiter, A., Surace, S. C., Sprekeler, H., & Pfister, J.-P. (2017). Nonlinear
Bayesian filtering and learning: A neuronal dynamics for perception. Scientific
Reports, 7(1), 8722.
Legenstein, R., & Maass, W. (2014). Ensembles of spiking neurons with noise support
optimal probabilistic inference in a dynamically changing environment. PLOS
Computational Biology, 10(10).
Lieder, F., Daunizeau, J., Garrido, M. I., Friston, K. J., & Stephan, K. E. (2013). Mod-
elling trial-by-trial changes in the mismatch negativity. PLOS Computational Biol-
ogy, 9(2).
Lin, K., Sharpnack, J. L., Rinaldo, A., & Tibshirani, R. J. (2017). A sharp error analysis
for the fused lasso, with application to approximate changepoint screening. In
I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, &
R. Garnett (Eds.), Advances in neural information processing systems, 30 (pp. 6884–
6893). Red Hook, NY: Curran.
Lisman, J., Grace, A. A., & Duzel, E. (2011). A neohebbian framework for episodic
memory; role of dopamine-dependent late LTP. Trends in Neurosciences, 34(10),
536–547.
Liu, J., & West, M. (2001). Combined parameter and state estimation in simulation-
based filtering. In A. Doucet, N. de Freitas, & N. Gordon (Eds.), Sequential Monte
Carlo methods in practice (pp. 197–223). Berlin: Springer.
Lomonaco, V., Desai, K., Culurciello, E., & Maltoni, D. (2019). Continual reinforcement
learning in 3D non-stationary environments. arXiv:1905.1011.
Loued-Khenissi, L., Pfeuffer, A., Einhäuser, W., & Preuschoff, K. (2020). Anterior in-
sula reflects surprise in value-based decision-making and perception. NeuroIm-
age, 116549.
Maheu, M., Dehaene, S., & Meyniel, F. (2019). Brain signatures of a multiscale process
of sequence learning in humans. Elife, 8, e41541.
Mars, R. B., Debener, S., Gladwin, T. E., Harrison, L. M., Haggard, P., Rothwell, J. C.,
& Bestmann, S. (2008). Trial-by-trial fluctuations in the event-related electroen-
cephalogram reflect dynamic changes in the degree of surprise. Journal of Neuro-
science, 28(47), 12539–12545.
Masegosa, A., Nielsen, T. D., Langseth, H., Ramos-López, D., Salmerón, A., & Mad-
sen, A. L. (2017). Bayesian models of data streams with hierarchical power priors.
In Proceedings of the 34th International Conference on Machine Learning, vol. 70 (pp.
2334–2343). PMLR.
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
Learning in Volatile Environments With the Bayes Factor Surprise
339
Mathys, C., Daunizeau, J., Friston, K. J., & Stephan, K. E. (2011). A Bayesian founda-
tion for individual learning under uncertainty. Frontiers in Human Neuroscience,
5, 39.
Meyniel, F., Maheu, M., & Dehaene, S. (2016). Human inferences about sequences:
A minimal transition probability model. PLOS Computational Biology, 12(12),
e1005260.
Modirshanechi, A., Kiani, M. M., & Aghajan, H. (2019). Trial-by-trial surprise-
decoding model for visual and auditory binary oddball tasks. NeuroImage, 196,
302–317.
Musiolek, L., Blankenburg, F., Ostwald, D., & Rabovsky, M. (2019). Modeling the
n400 brain potential as semantic Bayesian surprise. In Proceedings of the 2019
Conference on Cognitive Computational Neuroscience. https://ccneuro.org/2019/
Papers/AcceptedPapers.asp
Nagabandi, A., Clavera, I., Liu, S., Fearing, R. S., Abbeel, P., Levine, S., & Finn,
C. (2018). Learning to adapt in dynamic, real-world environments through meta-
reinforcement learning. arXiv:1803.11347.
Nassar, M. R., Bruckner, R., & Frank, M. J. (2019). Statistical context dictates the re-
lationship between feedback-related EEG signals and learning. Elife, 8, e46975.
Nassar, M. R., Rumsey, K. M., Wilson, R. C., Parikh, K., Heasly, B., & Gold, J. I. (2012).
Rational regulation of learning dynamics by pupil-linked arousal systems. Nature
Neuroscience, 15(7), 1040–1046.
Nassar, M. R., Wilson, R. C., Heasly, B., & Gold, J. I. (2010). An approximately
Bayesian delta-rule model explains the dynamics of belief updating in a changing
environment. Journal of Neuroscience, 30(37), 12366–12378.
Ostwald, D., Spitzer, B., Guggenmos, M., Schmidt, T. T., Kiebel, S. J., & Blankenburg,
F. (2012). Evidence for neural encoding of Bayesian surprise in human somatosen-
sation. NeuroImage, 62(1), 177–188.
Özkan, E., Šmídl, V., Saha, S., Lundquist, C., & Gustafsson, F. (2013). Marginal-
ized adaptive particle filtering for nonlinear models with unknown time-varying
noise parameters. Automatica, 49(6), 1566–1575.
Papoulis, A., & Saunders, H. (1989). Probability, random variables and stochastic pro-
cesses. American Society of Mechanical Engineers Digital Collection.
Prat-Carrabin, A., Wilson, R. C., Cohen, J. D., & Da Silveira, R. A. (2020). Human
inference in changing environments with temporal structure. bioRxiv:720516.
Preuschoff, K., t Hart, B. M., & Einhauser, W. (2011). Pupil dilation signals surprise:
Evidence for noradrenaline’s role in decision making. Frontiers in Neuroscience, 5,
115.
Ryali, C., Reddy, G., & Yu, A. J. (2018). Demystifying excessively volatile human
learning: A Bayesian persistent prior and a neural approximation. In S. Bengio,
H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett, Advances
in neural information processing systems, 31 (pp. 2781–2790). Red Hook, NY: Curran.
Särkkä, S. (2013). Bayesian filtering and smoothing (Vol. 3). Cambridge: Cambridge Uni-
versity Press.
Schmidhuber, J. (2010). Formal theory of creativity, fun, and intrinsic motivation
(1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3), 230–247.
Schwartenbeck, P., FitzGerald, T., Dolan, R., & Friston, K. (2013). Exploration, nov-
elty, surprise, and free energy minimization. Frontiers in Psychology, 4, 710.
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3
340
V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea
Shannon, C. (1948). A mathematical theory of communication. Bell System Technical
Journal, 20, 623–656; 27, 379-423.
Shi, L., & Griffiths, T. L. (2009). Neural implementation of hierarchical Bayesian
inference by importance sampling. In Y. Bengio, D, Schuurmans, J. Lafferty, C.
Williams, & A. Culotta (Eds.), Advances in neural information processing systems, 22
(pp. 1669–1677). Red Hook, NY: Curran.
Storck, J., Hochreiter, S., & Schmidhuber, J. (1995). Reinforcement driven information
acquisition in non-deterministic environments. In Proceedings of the International
Conference on Artificial Neural Networks (Vol. 2, pp. 159–164). Piscataway, NJ: IEEE.
Traoré, R., Caselles-Dupré, H., Lesort, T., Sun, T., Cai, G., Díaz-Rodríguez, N., &
Filliat, D. (2019). Discorl: Continual reinforcement learning via policy distillation.
arXiv:1907.05855.
Wilson, R. C., Nassar, M. R., & Gold, J. I. (2010). Bayesian online learning of the
hazard rate in change-point problems. Neural Computation, 22(9), 2452–2476.
Wilson, R. C., Nassar, M. R., & Gold, J. I. (2013). A mixture of delta-rules approxima-
tion to Bayesian inference in change-point problems. PLOS Computational Biology,
9(7), e1003150.
Yu, A. J. (2012). Change is in the eye of the beholder. Nature Neuroscience, 15(7), 933.
Yu, A. J., & Cohen, J. D. (2009). Sequential effects: Superstition or rational behavior?
In D. Koller, D. Schuurmans, Y. Bengio, & L. Bottou (Eds.), Advances in neural
information processing systems, 21 (pp. 1873–1880). Red Hook, NY: Curran.
Yu, A. J., & Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron,
46(4), 681–692.
Received April 27, 2020; accepted September 8, 2020.
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
/
e
d
u
n
e
c
o
a
r
t
i
c
e
-
p
d
/
l
f
/
/
/
/
3
3
2
2
6
9
1
8
9
6
8
0
8
n
e
c
o
_
a
_
0
1
3
5
2
p
d
.
/
f
b
y
g
u
e
s
t
t
o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3