LETTER

Communicated by Iris Groen

Temporal Variabilities Provide Additional Category-Related
Information in Object Category Decoding: A Systematic
Comparison of Informative EEG Features

Hamid Karimi-Rouzbahani
hamid.karimi-rouzbahani@mrc-cbu.cam.ac.uk
Medical Research Council Cognition and Brain Sciences Unit, University of
Cambridge, Cambridge CB2 7EF, U.K.; Perception in Action Research Centre
and Department of Cognitive Science; and Department of Computing,
Macquarie University, NSW 2109, Australia

Mozhgan Shahmohammadi
mozhganshahmohamadi1368@gmail.com
Department of Computer Engineering, Central Tehran Branch, Islamic Azad
University, Tehran 1584743311, Iran

Ehsan Vahab
ehsan.vahab@gmail.com
Department of Computer and Information Technology Engineering,
Qazvin Branch, Islamic Azad University, Qazvin 341851416, Iran

Saeed Setayeshi
setayesh@aut.ac.ir
Department of Medical Radiation Engineering, Amirkabir University of Technology,
Tehran 1591634311, Iran

Thomas Carlson
thomas.carlson@sydney.edu.au
School of Psychology, University of Sydney, NSW 2006, Australia, and Perception
in Action Research Centre and Department of Cognitive Science,
Macquarie University, NSW 2109, Australia

Neural Computation 33, 3027–3072 (2021) © 2021 Massachusetts Institute of Technology.
https://doi.org/10.1162/neco_a_01436
Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

How does the human brain encode visual object categories? Our under-
standing of this has advanced substantially with the development of
multivariate decoding analyses. However, conventional electroen-
cephalography (EEG) decoding predominantly uses the mean neural
activation within the analysis window to extract category information.
Such temporal averaging overlooks the within-trial neural variability
that is suggested to provide an additional channel for the encoding of in-
formation about the complexity and uncertainty of the sensory input. The
richness of temporal variabilities, however, has not been systematically
compared with the conventional mean activity. Here we compare the in-
formation content of 31 variability-sensitive features against the mean of
activity, using three independent, highly varied data sets. In whole-trial
decoding, the classical event-related potential (ERP) components of P2a
and P2b provided information comparable to that provided by original
magnitude data (OMD) and wavelet coefficients (WC), the two most
informative variability-sensitive features. In time-resolved decoding,
the OMD and WC outperformed all the other features (including the
mean), which were sensitive to limited and specific aspects of temporal
variabilities, such as their phase or frequency. The information was
more pronounced in the theta frequency band, previously suggested
to support feedforward visual processing. We concluded that the brain
might encode the information in multiple aspects of neural variabilities
simultaneously, such as phase, amplitude, and frequency, rather than the
mean per se. In our active categorization data set, we found that more
effective decoding of the neural codes corresponds to better prediction
of behavioral performance. Therefore, the incorporation of temporal
variabilities in time-resolved decoding can provide additional category
information and improved prediction of behavior.

1 Introduction

How does the brain encode information about visual object categories? This
question has been studied for decades using different neural recording tech-
niques, including invasive neurophysiology (Hung, Kreiman, Poggio, &
DiCarlo, 2005) and electrocorticography (ECoG; Majima et al., 2014;
Watrous, Deuker, Fell, & Axmacher, 2015; Rupp et al., 2017; Miyakawa
et al., 2018; Liu, Agam, Madsen, & Kreiman, 2009), as well as noninva-
sive neuroimaging methods such as functional magnetic resonance imag-
ing (fMRI; Haxby et al., 2001), magnetoencephalography (MEG; Contini,
Wardle, & Carlson, 2017; Carlson, Tovar, Alink, & Kriegeskorte, 2013),
and electroencephalography (EEG; Kaneshiro, Guimaraes, Kim, Norcia, &
Suppes, 2015; Simanova, Van Gerven, Oostenveld, & Hagoort, 2010), or a
combination of them (Cichy, Pantazis, & Oliva, 2014). There has been great
success in reading out or decoding neural representations of semantic object
categories from neuroimaging data. Cependant, it is still unclear if the con-
ventional decoding analyses effectively detect the complex neural codes.
Critically, one potential source of neural codes in high-temporal-resolution
data (par exemple., EEG) can be the “within-trial/window temporal variability” of
EEG signals, which is generally ignored through temporal averaging in de-
coding. The use of such summarized “mean” activity can hide the true spa-
tiotemporal dynamics of neural processes such as object category encoding,
which is still debated in cognitive neuroscience (Grootswagers, Robinson, &
Carlson, 2019; Majima et al., 2014; Karimi-Rouzbahani, Bagheri, & Ebrahim-
pour, 2017b; Isik, Meyers, Leibo, & Poggio, 2014; Cichy et al., 2014; Karimi-
Rouzbahani, 2018).

Here, we quantitatively compare the information content and the tempo-
ral dynamics of a large set of features from EEG time series, each sensitive
to a specific aspect of within-trial temporal variability. We then evaluate the
relevance of these features by measuring how well each one predicts behav-
ioral performance. Sensory neural codes are multiplexed structures contain-
ing information on different timescales and about different aspects of the
sensory input (Panzeri, Brunel, Logothetis, & Kayser, 2010; Wark, Fairhall,
& Rieke, 2009; Gawne, Kjaer, & Richmond, 1996). Previous animal stud-
ies have shown that the brain encodes the sensory information not only in
the neural firing rates (c'est à dire., average number of neural spikes within specific
time windows), but also in more complex patterns of neural activity, tel
as millisecond-precise activity and phase (Kayser, Montemurro, Logothetis,
& Panzeri, 2009; Victor, 2000; Montemurro, Rasch, Murayama, Logothetis,
& Panzeri, 2008). It was shown that stimulus contrast was represented by
latency coding at a temporal precision of about 10 ms, whereas the stimulus
orientation and the spatial frequency were encoded at a coarser temporal
precision (30 ms and 100 ms, respectively; Victor, 2000). It was shown that
spike rates on 5 ms to 10 ms timescales carried complementary information
to the phase of firing relative to low-frequency (1–8 Hz) local field potentials
(LFPs) about epochs of a naturalistic movie (Montemurro et al., 2008). There-
fore, the temporal patterns and variabilities of neural activity are enriched
platforms of neural codes.

Recent computational and experimental studies have proposed that
neural variability provides a separate and additional channel to the mean
activity for the encoding of general aspects of the sensory information—
Par exemple, its “uncertainty” and “complexity” (Orbán, Berkes, Fiser, &
Lengyel, 2016; Garrett, Epp, Kleemeyer, Lindenberger, & Polk, 2020). Specif-
ically, uncertainty about the stimulus features (par exemple., orientations of lines in
the image) was directly linked to neural variability in monkeys’ visual area
(Orbán et al., 2016) and human EEG (Kosciessa, Lindenberger, & Garrett,
2021): a wider inferred range of possible feature combinations in the input
stimulus corresponded to a wider distribution of neural responses. This
could be applied to both within- and across-trial variability (Orbán et al.,
2016). De plus, temporal variability was directly related to the complex-
ity of input images: higher neural variability for house (c'est à dire., more varied)
versus face (c'est à dire., less varied) images (Garrett et al., 2020) and provided a
reliable measure of perceptual performance in behavior (Waschke, Tune,
& Obleser, 2019). The uncertainty- and complexity-dependent modulation
of neural variability, which is linked to the category of input information,
has been suggested to facilitate neural energy saving and adaptive and
effective encoding of the sensory inputs in changing environments (Garrett
et coll., 2020; Waschke, Kloosterman, Obleser, & Garrett, 2021).

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3030

H. Karimi-Rouzbahani et al.

Despite the richness of information encoded by neural variabilities, le
unclear transformation of such neuronal codes into EEG activity has led
to divergent approaches used for decoding information from EEG. Pour
exemple, the information in neural firing rates might appear in phase
patterns rather than amplitude of EEG oscillations (Ng, Logothetis, &
Kayser, 2013). Generally, three families of features have been extracted
from EEG time series to detect neural codes from temporal variabilities
(Waschke et al., 2021): variance-, frequency- and information theory-based
features, each detecting specific aspects of variability. In whole-trial decod-
ing, components of event-related potentials (ERPs) such as N1, P1, P2a,
and P2b, which quantify time-specific variabilities of within-trial activa-
tion, have provided significant information about object categories (sep-
arately and in combination; Chan, Halgren, Marinkovic, & Cash, 2011;
Wang, Xiong, Hu, Yao, & Zhang, 2012; Qin et al., 2016). Others success-
fully decoded information from more complex variance- and frequency-
based features such as signal phase (Behroozi, Daliri, & Shekarchi, 2016;
Watrous, Deuker, Fell, & Axmacher, 2015; Torabi, Jahromy, & Daliri, 2017;
Wang, Wang, & Yu, 2018; Voloh, Oemisch, & Womelsdorf, 2020), signal
power across frequency bands (Rupp et al., 2017; Miyakawa et al., 2018; Ma-
jima et al., 2014), time-frequency wavelet coefficients (Hatamimajoumerd
& Talebpour, 2019; Taghizadeh-Sarabi, Daliri, & Niksirat, 2015), interelec-
trode temporal correlations (Karimi-Rouzbahani, Bagheri, & Ebrahimpour,
2017un), and information-based features (par exemple., entropy; Joshi, Panigrahi,
Anand, & Santhosh, 2018; Torabi et al., 2017; Stam, 2005). Therefore, the
neural codes are generally detected from EEG activity using a wide range
of features sensitive to temporal variability.

While insightful, previous studies have also posed new questions about
the relative richness, temporal dynamics, and behavioral relevance of differ-
ent features of neural variability. D'abord, can the features sensitive to temporal
variabilities provide additional category information to the conventional
mean feature? While several of the above studies have compared multi-
ple features (Chan et coll., 2011; Taghizadeh-Sarabi et al., 2015; Torabi et al.,
2016), none of them compared their results against the conventional mean
activité, which is the dominant feature, especially in time-resolved decod-
ing (Grootswagers, Wardle, & Carlson, 2017). This comparison will not only
validate the richness of each feature of neural variability but will also show
if the mean activity detects a large portion of the neural codes produced
by the brain. We predicted that the informative neural variabilities, if prop-
erly decoded, should provide additional information to the mean activity,
which overlooks the variability within the analysis window.

Deuxième, do the features sensitive to temporal variabilities evolve over
similar time windows to the “mean” feature? Among all the studies men-
tioned above, only a few investigated the temporal dynamics of features,
other than the mean in time-resolved decoding (Majima et al., 2014; Stewart,
Nuthmann, & Sanguinetti, 2014; Karimi-Rouzbahani et al., 2017a), where
the temporal evolution of information encoding is studied (Grootswagers
et coll., 2017). As distinct aspects of sensory information (par exemple., contrast ver-
sus spatial frequency) are represented on different temporal scales (Victor,
2000; Montemurro et al., 2008) and different variability features are poten-
tially sensitive to distinct aspects of variability, we might see differential
temporal dynamics for different features.

Troisième, do the features sensitive to temporal variabilities explain the
behavioral recognition performance more accurately than the mean fea-
ture? One important question, which was not covered in the above studies,
was whether the extracted information was behaviorally relevant or just
epiphenomenal to the experimental conditions. One way of validating the
relevance of the extracted neural codes is to check if they could predict
the relevant behavior (Williams, Dang, & Kanwisher, 2007; Grootswagers,
Cichy, & Carlson, 2018; Woolgar, Dermody, Afshar, Williams, & Rich, 2019).
We previously found that the decoding accuracies obtained from mean
signal activations could predict the behavioral recognition performance
(Ritchie, Tovar, & Carlson, 2015). Cependant, it remains unknown whether
(if at all) the information obtained from temporal variabilities can explain
more variance of the behavioral performance. Our prediction was that as
the more informative features access more of the potentially overlooked
neural codes, they should also explain the behavioral performance more
accurately.

In this study, we address the above questions to provide additional in-
sights into what aspects of neural variabilities might reflect the neural codes
more thoroughly and how we can extract them most effectively using mul-
tivariate decoding analyses.

2 Methods

The data sets used in this study and the code are available online at
https://osf.io/wbvpn/. The EEG and behavioral data are available in
Matlab .mat format and the code in Matlab .m format. All the open-source
scripts used in this study were compared against other implementations of
identical algorithms in simulations and were used only if they produced
identical results. To evaluate the different imple-
mentations, we tested them using 1000 random (normally distributed with
unit variance and zero mean) time series, each including 1000 samples.

2.1 Overview of Data Sets. We chose three previously published EEG
data sets in this study, which differed across a wide range of parameters, in-
cluding the recording setup (e.g., amplifier, number of electrodes, prepro-
cessing steps), characteristics of the image-set (par exemple., number of categories
and exemplars within each category, colorfulness of images), and task (par exemple.,
presentation length, order and the participants’ task; see Table 1). All three

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3032

H. Karimi-Rouzbahani et al.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

e
je
un
m
7

e
je
un
m
e
F

3

e
je
un
m
6

e
je
un
m
e
F

4

e
je
un
m
7

e
je
un
m
e
F

3

r
e
d
n
e
G

)
n
un
je
d
e
m

(

oui
c
un
r
toi
c
c
UN

k
s
un
T

)
oui
r
e
h
p
je
r
e
P.
(

'
s
t
n
un
p
je
c
je
t
r
un
P.

'
s
t
n
un
p
je
c
je
t
r
un
P.

e
g
UN

'
s
t
n
un
p
je
c
je
t
r
un
P.

je

s
toi
toi
m

je
t
S

je

s
toi
toi
m

je
t
S

e
z
je
S

n
o
je
t
un
t
n
e
s
e
r
P.

1
.

2
2

%
8
6
.
4
9

g
n
je
h
c
t
un
m

r
o
je
o
C

)
e
v
je
s
s
un
p
(

)

5

.

3
1

2

8

.

8

7
.
0
(

4
.

6
2

%
5
6
.
4
9

oui
r
o
g
e
t
un
c

j

t
c
e
b
Ô

)
0
(

8
×

8

s

m
0
0
9

n
o
je
t
c
e
t
e
d

)
e
v
je
t
c
un
(

e
m
T

je

s

m
0
5

3

6

5
.

0
3

UN
/
N

)
n
o
je
t
un
X
fi
(

)
0
(

k
s
un
t
o
N

5
.
6
×

0
.
7

s

m
0
0
5

2
1

je

s
toi
toi
m

je
t
S
#

j

t
c
e
b
Ô
#

h
c
t
o
N

n
o
je
t
je
t
e
p
e
R.

s
e
je
r
o
g
e
t
un
C

g
n
je
r
e
t
je
je
F

s
s
un
p
d
n
un
B

g
n
je
r
e
t
je
je
F

#

s
e
d
o
r
t
c
e
je
E

t
e
S
un
t
un
D

.

oui
d
toi
t
S
s
je
h
T
n
je

d
e
s
U
s
t
e
S
un
t
un
D
e
e
r
h
T
e
h
t

F
o
s
je
je
un
t
e
D

:
1
e
je
b
un
T

4

4

6

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

o
N

z
H
0
5

3
0

.

0

8
2
1

,
.
je
un
t
e
o
r
je
h
s
e
n
un
K

3

5
1
0
2

,
je
n
un
h
un
b
z
toi
o
R.

,

b
un
h
un
V

,
r
toi
o
p
m
je
h
un
r
b
E

,
j
un
h
n
e
M.
&

9
1
0
2

z
H
0
5

z
H
0
0
2

3
0

.

0

z
H
0
5

z
H
0
0
2

3
0

.

0

1
3

1
3

je
n
un
h
un
b
z
toi
o
R.

un
7
1
0
2

,
.
je
un
t
e


je

m

je
r
un
K


je

m

je
r
un
K

1

2

Chiffre 1: Paradigms of the data sets used in this study. Data set 1 (top row)
presented two consecutive object images, each with a fixation dot. The partici-
pant’s task was to indicate if the fixation dots were the same or different colors
across the image pairs (passive task). Data set 2 (middle row) presented objects
from the target and nontarget categories in sequences of 12 images. The partici-
pant’s task was to indicate, for each image, if it was from the target or nontarget
catégorie (active task). Data set 3 (bottom row), presented sequences of object
images from six categories. Participants did not have any specific tasks except
for looking at the center of the image (no overt task). More details about the data
sets in the relevant references are provided in Table 1.

data sets previously successfully provided object category information us-
ing multivariate analyses.

2.1.1 Data Set 1. We previously collected data set 1 while participants
were briefly (c'est à dire., 50 ms) presented with gray-scale images from four syn-
thetically generated 3D object categories (Karimi-Rouzbahani et al., 2017un).
The objects underwent systematic variations in scale, positional periphery,
in-depth rotation, and lighting conditions, which made perception difficult,
especially in extreme variation conditions. Randomly ordered stimuli were
presented in consecutive pairs (voir la figure 1, top row). The participants’
task was unrelated to object categorization; they pressed one of two pre-
determined buttons to indicate if the fixation dots, superimposed on the
first and second stimuli, were the same or a different color (two-alternative
forced choice).

2.1.2 Data Set 2. We collected data set 2 in an active categorization exper-
iment, in which participants pressed a button if the presented object image
was from a target category (go/no-go), which was cued at the beginning
of each block of 12 stimuli (Karimi-Rouzbahani, Vahab, Ebrahimpour, &
Menhaj, 2019; voir la figure 1, middle row). The object images, which were
cropped from photographs, were part of the well-established benchmark
image set for object recognition developed by Kiani, Esteky, Mirpour, et
Tanaka (2007). This image set has been previously used to extract object cat-
egory information from both human and monkey brains using MEG (Cichy
et al., 2014), fMRI (Cichy et al., 2014; Kriegeskorte et al., 2008), and single-
cell electrophysiology (Kriegeskorte et al., 2008; Kiani et al., 2007).

2.1.3 Data Set 3. We also used another data set (data set 3), ce qui était
not collected in our lab. This data set was collected by Kaneshiro et al. (2015)
over six sessions for each participant. We used only the first session because
it could represent the whole data set (the later sessions were repetitions of
the same stimuli to increase the signal-to-noise ratio) and we preferred to
avoid a potential effect of extended familiarity with the stimuli on neu-
ral representations. The EEG data were collected during passive viewing
(participants had no task but to keep fixating on the central fixation cross;
voir la figure 1, bottom row) of six categories of objects with stimuli chosen
from Kiani et al. (2007) as explained above. We used a preprocessed (c'est à dire.,
bandpass-filtered in the range 0.03 à 50 Hz) version of the data set, lequel
was available online.1

1 https://purl.stanford.edu/tc919dd5388.

All three data sets were collected at a sampling rate of 1000 Hz. For data
sets 1 et 2, only the trials that led to correct responses by participants
were used in the analyses. Each data set consisted of data from 10 par-
ticipants. Each object category in each data set included 12 exemplars. À
make the three data sets as consistent as possible, we preprocessed them
differently from their original papers. Spécifiquement, the bandpass filtering
range of data set 3 était 0.03 à 50 Hz, and we did not have access to the
raw data to increase the upper cutoff frequency to 200 Hz. Data sets 1
et 2 were bandpass-filtered in the range 0.03 à 200 Hz before the data
were split into trials. We also applied 50 Hz notch filters to data sets 1 et
2 to remove line noise. Suivant, we generated different versions of the data
by bandpass-filtering the data in delta (0.5–4 Hz), theta (4–8 Hz), alpha
(8–12 Hz), beta (12–16 Hz), and gamma (16–200 Hz) bands to see if there
is any advantage for the suggested theta or delta frequency bands (Wa-
trous et al., 2015; Behroozi et al., 2016; Wang, Wang, & Yu, 2018). We used
finite-impulse-response (FIR) filters with 12 dB roll-off per octave for
bandpass-filtering of data sets 1 et 2 and when evaluating the sub-bands
of the three data sets. All the filters were applied before splitting the data
into trials.

We did not remove artifacts (par exemple., eye related and movement related)
from the signals, as we and others have shown that sporadic artifacts have
minimal effect in multivariate decoding (Grootswagers et al., 2017). À
increase signal-to-noise ratios in the analyses, each unique stimulus was
presented to the participants 3, 6, et 12 times in data sets 1, 2, et 3, concernant-
spectively. Trials were defined in the time window from 200 ms before to
1000 ms after the stimulus onset to cover most of the range of event-related
neural activations. The average prestimulus (−200–0 ms relative to the stim-
ulus onset) signal amplitude was removed from each trial of the data. Pour
more information about each data set, see Table 1 and the references to their
original publications.
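As an illustration of the preprocessing order described above (filtering the
continuous recordings before splitting them into trials, then baseline-
correcting each trial), the following Matlab sketch shows one plausible
realization. The variable names (eeg_cont, onsets, trials) are hypothetical,
and the filter order is an arbitrary choice for illustration rather than the
exact design used in the study.

    % Sketch of the preprocessing pipeline (assumed variable names).
    % eeg_cont: [channels x samples] continuous recording at fs = 1000 Hz
    fs = 1000;
    bp = designfilt('bandpassfir', 'FilterOrder', 512, ...
        'CutoffFrequency1', 0.03, 'CutoffFrequency2', 200, 'SampleRate', fs);
    eeg_filt = filtfilt(bp, eeg_cont')';      % filter before epoching

    % Epoch from -200 to 1000 ms around each stimulus onset (in samples)
    pre = 200; post = 1000;
    n_trials = numel(onsets);
    trials = zeros(size(eeg_filt, 1), pre + post, n_trials);
    for tr = 1:n_trials
        epoch = eeg_filt(:, onsets(tr) - pre : onsets(tr) + post - 1);
        base = mean(epoch(:, 1:pre), 2);      % mean prestimulus amplitude
        trials(:, :, tr) = epoch - base;      % baseline correction per trial
    end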

2.2 Features. EEG signals are generated by inhibitory and excitatory
postsynaptic potentials of cortical neurons. These potentials extend to the
scalp surface and are recorded through electrodes as amplitudes of volt-
age in units of microvolts. Researchers have been using different aspects
of these voltage recordings to obtain meaningful information about human
brain processes. The main focus of this study is to compare the information
content of features that are sensitive to temporal variabilities of neural ac-
tivations against the mean of activity within the analysis window, which is
conventionally used in decoding analysis (Grootswagers et al., 2017). Below
we explain the mathematical formulas for each feature used in this study.
We also provide brief information about potential underlying neural mech-
anisms that can lead to the information content provided by each feature.
We classified the features into five classes based on their mathematical
similarity to simplify the presentation of the results and their interpreta-
tion: moment, complexity, ERP, frequency domain, and multivalued fea-
photos. Cependant, the classification of the features is not strict, and the features
might be classified based on other criteria and definitions. Par exemple,
complexity itself has different definitions (Tononi & Edelman, 1998), tel que
degree of randomness or degrees of freedom in a large system of interacting
elements. There are also recent studies that split the variability features into
the three categories of variance-, frequency- and information theory-based
catégories (Waschke et al., 2021). Donc, each definition may exclude or
include some of our features in the class. It is of note that we used only the
features that were previously used to decode categories of evoked poten-
tials from EEG signals through multivariate decoding analysis. Nonethe-
less, there are definitely other features available, especially those extracted
from EEG time series collected during long-term monitoring of human neu-
ral representations in health and disorder (Fulcher & Jones, 2017). In pre-
senting the features’ formulas, we avoided repeating the terms from the
first feature to the last one. Donc, readers might need to go back a few
steps or features to find the definitions of the terms. Note that in this study,
the analyses are performed in either 1000 ms time windows (i.e., number of
samples used for feature extraction: N = 1000) in the whole-trial analysis
or 50 ms time windows (N = 50) in time-resolved analysis.

2.2.1 Moment Features. These features are the most straightforward and
intuitive ones from which we might be able to extract information about
neural processes. Mean, variance, skewness, and kurtosis are the first to
fourth moments of EEG time series and can provide information about the
shape of the signals and their deviation from stationarity, which is the case
in evoked potentials (Rasoulzadeh et al., 2017; Wong, Galka, Yamashita, &
Ozaki, 2006). These moments have been shown to be able to differentiate
visually evoked responses (Pouryzdian & Erfanian, 2010; Alimardani, Cho,
Boostani, & Hwang, 2018). The second to fourth moments are also catego-
rized as variance-based features in recent studies (Waschke et al., 2021).

Mean. Mean amplitude of an EEG signal changes in proportion to the
neural activation of the brain. It is by far the most common feature of the
recorded neural activations used in analyzing brain states and cognitive
processes in both univariate and multivariate analyses (Vidal et al., 2010;
Hebart & Boulanger, 2018; Grootswagers et al., 2017; Karimi-Rouzbahani et al.,
2019). In EEG, brain activation is reflected as the amplitude of the recorded
voltage across each electrode and the reference electrode at specific time
points. To calculate the mean feature, the first moment in statistics, the sam-
ple mean is calculated for each recorded EEG time series as

\[ \bar{x} = \frac{1}{N} \sum_{t=1}^{N} x_t, \qquad (2.1) \]

where ¯x is the mean of the N time samples contained in the analysis window
and xt refers to the amplitude of the recorded sample at time point t. N can
be as small as unity as in the case of time-resolved EEG analysis (Grootswa-
gers et al., 2017) or so large that it can cover the whole trial in whole-trial
analyse. Accordingly, we set N = 1000 (1000 ms) and N = 50 (50 ms) pour
the whole-trial and time-resolved decoding analyses, respectivement.

Median. Compared to the mean feature, the median is less susceptible to
outliers (par exemple., spikes) in the time series, which might not come from neural
activations but rather from artifacts caused by, Par exemple, the recording
hardware, preprocessing, or eye-blinks. The median is calculated as

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

\[ \mathrm{Median}(X) = \begin{cases} X\left[\frac{N+1}{2}\right], & \text{if } N \text{ is odd} \\[4pt] \frac{1}{2}\left(X\left[\frac{N}{2}\right] + X\left[\frac{N}{2}+1\right]\right), & \text{if } N \text{ is even,} \end{cases} \qquad (2.2) \]


where X is the ordered (sorted) sequence of the samples of the time series
xt for t = 1, . . . , N.

Variance. The variance of an EEG signal is one of the simplest indicators
showing how much the signal deviates from stationarity, that is, from its
original baseline statistical properties (Wong et al., 2006). It is a measure of
signal variabilities (within trial here), has been shown to decline upon the
stimulus onset potentially as a result of neural coactivation, and has pro-
vided information about object categories in a recent EEG decoding study
(Karimi-Rouzbahani et al., 2017un). Variance is calculated as

p 2 = 1
N

N(cid:2)

t=1

(xt − ¯x)2.

(2.3)

Skewness. While variance is silent about the direction of the deviation
from the mean, skewness, the third signal moment, measures the degree of
asymmetry in the signal’s probability distribution. In a symmetric distribu-
tion (i.e., when samples are symmetric around the mean), skewness is zero.
Positive and negative skewness indicates right- and left-ward tailed distri-
bution, respectivement. As the visually evoked ERP responses usually tend to
be asymmetrically deviated in either a positive or negative direction, même
after baseline correction (Mazaheri & Jensen, 2008), we assume that skew-
ness should provide information about the visual stimulus if each category
modulates the deviation of the samples differentially. Skewness is calcu-
lated as

c

1

= 1
N

(cid:12)

N(cid:2)

t=1

(cid:13)

3

.

xt − ¯x
p

(2.4)

Kurtosis. Kurtosis reflects the degree of “tailedness” or “flatness” of
the signal’s probability distribution. Accordingly, the more heaviness there
is in the tails, the higher the value of the kurtosis, and vice versa. Based on
previous studies, kurtosis has provided distinct representations corresponding
to different classes of visually evoked potentials (Alimardani et al., 2018;
Pouryzdian & Erfanian, 2010). We test to see if it plays a more generalized
role in information coding (par exemple., coding of semantic aspects of visual infor-
mation) aussi. It is the fourth standardized moment of the signal, defined
comme

\[ \mathrm{Kurt} = \frac{1}{N} \sum_{t=1}^{N} \left( \frac{x_t - \bar{x}}{\sigma} \right)^4. \qquad (2.5) \]
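For reference, all five moment-class features of one analysis window x
reduce to single Matlab calls (skewness and kurtosis require the Statistics
and Machine Learning Toolbox); this sketch follows equations 2.1 to 2.5,
with the 1/N normalization of equation 2.3:

    % Moment features for one analysis window x (vector of N samples)
    f.mean     = mean(x);      % equation 2.1
    f.median   = median(x);    % equation 2.2
    f.variance = var(x, 1);    % equation 2.3 (1/N normalization)
    f.skewness = skewness(x);  % equation 2.4
    f.kurtosis = kurtosis(x);  % equation 2.5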

2.2.2 Complexity Features. There can potentially be many cases in which
simple moment statistics such as mean, median, variance, skewness, and
kurtosis, which rely on distributional assumptions, provide equal values for
distinct time series (e.g., series A: 10, 20, 10, 20, 10, 20, 10, 20 versus series
B: 20, 20, 20, 10, 20, 10, 10, 10). Therefore, we need more complex and
possibly nonlinear measures
that can detect subtle but meaningful temporal patterns from time series.
The analysis of nonlinear signal features has recently been growing, fol-
lowing the findings showing that EEG reflects weak but significant nonlin-
ear structures (Stam, 2005; Stępień, 2002). Notably, many studies have
shown that the complexity of EEG time series can significantly alter dur-
ing cognitive tasks such as visual (Bizas et al., 1999) and working memory
tasks (Sammer, 1999; Stam, 2000). Therefore, it was necessary to evaluate
the information content of nonlinear features for our decoding of object
catégories. As mentioned above, the grouping of these nonlinear features
as “complexity” here is not strict, and the features included in this class are
those that capture complex and nonlinear patterns across time series. Al-
though the accurate detection of complex and nonlinear patterns generally
needs more time samples compared to linear patterns (Procaccia, 1988), it
has been shown that nonlinear structures can be detected from short EEG
time series as well (c'est à dire., through fractal dimensions; Preissl, Lutzenberger,
Pulvermüller, & Birbaumer, 1997). Néanmoins, we extract these features
from both time-resolved (50 samples) and whole-trial data (1000 samples)
to ensure we do not miss potential information represented in longer tem-
poral scales.

Lempel-Ziv complexity (LZ Cmplx). Lempel-Ziv complexity measures the
complexity of time series (Lempel & Ziv, 1976). Basically, the algorithm
counts the number of unique sub-sequences within a larger binary se-
quence. Accordingly, a sequence of samples with a certain regularity does
not lead to a large LZ complexity. Cependant, the complexity generally grows
with the length of the sequence and its irregularity. Autrement dit, it mea-
sures the generation rate of new patterns along a digital sequence. In a
comparative work, it was shown that compared to many other frequency
metrics of time series (par exemple., noise power, stochastic variability), LZ complex-
ity has the unique feature of providing a scalar estimate of the bandwidth
of time series and the harmonic variability in quasi-periodic signals (Aboy,
Hornero, Abásolo, & Álvarez, 2006). It is widely used in biomedical sig-
nal processing and has provided successful results in the decoding of vi-
sual stimuli from neural responses in primary visual cortices (Szczepa ´nski,
Amigó, Wajnryb, & Sanchez-Vives, 2003). We used the code by Quang Thai2
implemented based on “exhaustive complexity,” which is considered to
provide the lower limit of the complexity as explained by Lempel and Ziv
(1976). We used the signal median as a threshold to convert the signals into
binary sequences for the calculation of LZ complexity. The LZ complexity
provided a single value for each signal time series.

2 https://www.mathworks.com/matlabcentral/fileexchange/38211-calc_lz_complexity.
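The sketch below illustrates the general scheme: median-threshold
binarization followed by LZ76 phrase counting. It is only an approximation
of the “exhaustive complexity” variant used in the study, which differs in
its parsing details.

    % Median-threshold binarization and LZ76 phrase counting (sketch)
    b = char((x > median(x)) + '0');  % binary string from the signal
    n = numel(b); i = 1; c = 0;
    while i <= n
        l = 1;                        % grow phrase while it occurs in history
        while i + l - 1 <= n && ~isempty(strfind(b(1:i+l-2), b(i:i+l-1)))
            l = l + 1;
        end
        c = c + 1;                    % close the phrase
        i = i + l;
    end
    % c is the LZ complexity of the window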

Fractal dimension. In signal processing, fractal is an indexing technique
that provides statistical information about the complexity of time series. UN
higher fractal value indicates more complexity for a sequence as reflected
in more nesting of repetitive sub-sequences at all scales. Fractal dimensions
are widely used to measure two important attributes: self-similarity and
the shape of irregularity. A growing set of studies has been using fractal
analyses for the extraction of information about semantic object categories
(such as living and nonliving categories of visual objects; Ahmadi-Pajouh,
Ala, Zamanian, Namazi, & Jafari, 2018; Torabi et al., 2017), as well as sim-
ple checkerboard patterns (Namazi, Ala, & Bakardjian, 2018) from visually
evoked potentials. Dans cette étude, we implemented two of the common meth-
ods for the calculation of fractal dimensions of EEG time series, which have
been previously used to extract information about object categories as ex-
plained below. We used the implementations by Jesús Monge Álvarez3 for
fractal analysis.

3 https://ww2.mathworks.cn/matlabcentral/fileexchange/50290-higuchi-and-katz-fractal-dimension-measures.

In Higuchi’s fractal dimension (Higuchi FD; Higuchi, 1988), a set of sub-
sequences x_k^m is generated in which k and m refer to the step size and
initial value, respectively. Then the length of the curve for each subsequence
is calculated as

\[ L_k^m = \left( \sum_{i=1}^{\lfloor (N-m)/k \rfloor} \left| x_{m+ik} - x_{m+(i-1)k} \right| \right) \frac{N-1}{\lfloor (N-m)/k \rfloor \, k}, \qquad (2.6) \]

where (N-1)/(\lfloor (N-m)/k \rfloor \, k) is the normalization factor. The
length of the fractal curve at step size k is calculated by averaging the k
sets of L_k^m. Finally, the resultant average is proportional to k^{-D},
where D is the fractal dimension. We set the free parameter k equal to half
the length of the signal time series in the current study.
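A compact Matlab sketch of Higuchi’s procedure, following equation 2.6,
with the fractal dimension D estimated from the slope of the log-log curve
(and k set to half the window length, as above):

    % Higuchi fractal dimension of window x (sketch of equation 2.6)
    N = numel(x); kmax = floor(N/2);
    kk = 1:kmax; Lk = zeros(1, kmax);
    for k = kk
        Lm = zeros(1, k);
        for m = 1:k                     % curve length per initial value m
            idx = m:k:N;
            Lm(m) = sum(abs(diff(x(idx)))) * (N - 1) / ((numel(idx) - 1) * k);
        end
        Lk(k) = mean(Lm);               % average over the k subsequences
    end
    p = polyfit(log(1 ./ kk), log(Lk), 1);
    D = p(1);                           % slope: Lk proportional to k^(-D)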

We also calculated fractal dimension using Katz’s method (Katz FD;
Katz, 1988), as it showed a significant amount of information about object
categories in a previous study (Torabi et al., 2017). The fractal dimension
D is calculated as

\[ D = \frac{\log_{10}(L/a)}{\log_{10}(d/a)} = \frac{\log_{10} r}{\log_{10}(d/L) + \log_{10} r}, \qquad (2.7) \]

where L and a refer to the sum and the average of the distances between
consecutive signal samples, respectively, and d refers to the maximum
distance between the first sample and the ith sample of the signal, that is,
the sample farthest from the first sample:

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3040

H. Karimi-Rouzbahani et al.


\[ L = \sum_{i=2}^{N} |x_i - x_{i-1}|, \qquad (2.8) \]

\[ d = \max_i \left( \mathrm{distance}(1, i) \right), \qquad (2.9) \]

\[ r = L/a. \qquad (2.10) \]
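Katz’s method reduces to a few lines; the sketch below follows equations
2.7 to 2.10, using amplitude differences as a simple stand-in for the
distance term of equation 2.9:

    % Katz fractal dimension of window x (equations 2.7-2.10), sketch
    L = sum(abs(diff(x)));                     % equation 2.8
    a = L / (numel(x) - 1);                    % average step length
    d = max(abs(x(2:end) - x(1)));             % equation 2.9 (amplitude distance)
    r = L / a;                                 % equation 2.10
    D = log10(r) / (log10(d / L) + log10(r));  % equation 2.7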

Hurst exponent. The Hurst exponent (Hurst Exp) is widely used to mea-
sure long-term memory in time-dependent random variables such as bio-
logical time series (Racine, 2011). Autrement dit, it measures the degree of
interdependence across samples in the time series and operates like an au-
tocorrelation function over time. Hurst values between 0.5 et 1 suggérer
the consecutive appearance of high signal values on large timescales while
values between 0 et 0.5 suggest frequent switching between high and low
signal values. Values around 0.5 suggest no specific patterns among sam-
ples of a time series. It is defined as an asymptotic behavior of a rescaled
range as a function of the time span of the time series, defined as

\[ E\!\left[ \frac{\max(z_1, z_2, \ldots, z_N) - \min(z_1, z_2, \ldots, z_N)}{\sqrt{\frac{1}{N}\sum_{t=1}^{N} (x_t - \bar{x})^2}} \right] = C \cdot N^{H} \quad \text{as } N \to \infty, \qquad (2.11) \]

\[ z_t = \sum_{i=1}^{t} y_i, \quad t = 1, \ldots, N, \qquad (2.12) \]

\[ y_t = x_t - \bar{x}, \qquad (2.13) \]

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

where E is the expected value, C is a constant, and H is the Hurst exponent
(Racine, 2011). We used the open-source implementation of the algorithm,4
which has also been used previously for the decoding of object category
information in EEG (Torabi et al., 2017).

4 https://www.mathworks.com/matlabcentral/fileexchange/9842-hurst-exponent.
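A rescaled-range (R/S) sketch of the Hurst estimate in equations 2.11 to
2.13: the signal is split into spans of several sizes, the range of the
cumulative mean-centered sums is rescaled by the standard deviation, and H
is taken as the slope of the log-log fit. This only illustrates the logic;
the open-source implementation used in the study may differ in details.

    % Rescaled-range (R/S) estimate of the Hurst exponent (sketch)
    ns = unique(floor(logspace(log10(8), log10(numel(x)), 10)));  % span sizes
    rs = zeros(size(ns));
    for j = 1:numel(ns)
        n = ns(j); m = floor(numel(x) / n); R = zeros(1, m);
        for i = 1:m
            seg = x((i-1)*n+1 : i*n);
            z = cumsum(seg - mean(seg));             % equations 2.12 and 2.13
            R(i) = (max(z) - min(z)) / std(seg, 1);  % rescaled range
        end
        rs(j) = mean(R);
    end
    p = polyfit(log(ns), log(rs), 1);
    H = p(1);                                        % equation 2.11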

Entropy. Entropy can measure the perturbation in time series (Waschke
et coll., 2021). A higher value for entropy suggests a higher irregularity in the
given time series. Precise calculation of entropy usually requires a consid-
erable number of samples and is also sensitive to noise. Here we used two
methods for the calculation of entropy, each of which has advantages over
the other.

Approximate entropy (Apprx Ent) was initially developed to be used for
medical data analysis (Pincus & Huang, 1992), such as heart rate, and then
was extended to other areas such as brain data analysis. It has the advan-
tage of requiring a low computational power, which makes it perfect for
real-time applications on low sample sizes (<50). However, the quality of
this entropy method is impaired on lower lengths of the data. This metric
detects changes in episodic behavior, which are not represented by peak
occurrences or amplitudes (Pincus & Huang, 1992). We used an open-source
code5 for calculating approximate entropy. We set the embedded dimension
and the tolerance parameters to 2% and 20% of the standard deviation of the
data, respectively, to roughly follow a previous study (Shourie, Firoozabadi,
& Badie, 2014), which compared approximate entropy in visually evoked
potentials and found differential effects across artist versus nonartist
participants when looking at paintings.

5 https://www.mathworks.com/matlabcentral/fileexchange/32427-fast-approximate-entropy.

Sample entropy (Sample Ent), a refinement of the approximate entropy, is
frequently used to calculate the regularity of biological signals (Richman &
Moorman, 2000). Basically, it is the negative natural logarithm of the
conditional probability that two sequences (subsets of samples) that are
similar for m points remain similar at the next point. A lower sample entropy
also reflects a higher self-similarity in the time series. It has two main
advantages over the approximate entropy: it is less sensitive to the length
of the data and is simpler to implement. However, it does not focus on
self-similar patterns in the data. We used the Matlab entropy function for
the extraction of this feature, which has already provided category
information in a previous study (Torabi et al., 2017). (See Richman &
Moorman, 2000, and Subha, Joseph, Acharya, & Lim, 2010, for the details of
the algorithm.)

Autocorrelation. Autocorrelation (Autocorr) determines the degree of
similarity between the samples of a given time series and a time-lagged
version of the same series. It detects periodic patterns in signals, which
are an integral part of EEG time series. Therefore, following recent
successful attempts in decoding neural information using the autocorrelation
function from EEG signals (Wairagkar, Zoulias, Oguntosin, Hayashi, & Nasuto,
2016), we evaluated the information content of the autocorrelation function
in decoding visual object categories. As neural activations reflect many
repetitive patterns across time, the autocorrelation function can quantify
the information content of those repetitive patterns. Autocorrelation is
calculated as

\[ R(\tau) = \frac{1}{(N - \tau)\,\sigma^2} \sum_{t=1}^{N-\tau} (x_t - \bar{x})(x_{t+\tau} - \bar{x}), \qquad (2.14) \]

where \tau indicates the number of lags in samples of the shifted signal. A
positive value for autocorrelation indicates a strong relationship between
the original time series and its shifted version, whereas a negative
autocorrelation refers to an opposite pattern between them. Zero
autocorrelation indicates no relationship between the original time series
and its shifted version. In this study, we extracted autocorrelations for 30
consecutive lags (\tau = 1, 2, . . . , 30) and used their average in
classification. Note that each lag refers to 1 ms as the data were sampled
at 1000 Hz.
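The averaged-lag autocorrelation feature follows directly from equation
2.14; a sketch for one 50-sample window x (with the population variance, as
in the equation):

    % Mean autocorrelation over lags 1 to 30 for window x (equation 2.14)
    xc = x - mean(x); N = numel(x); s2 = var(x, 1);
    R = zeros(1, 30);
    for tau = 1:30
        R(tau) = sum(xc(1:N-tau) .* xc(1+tau:N)) / ((N - tau) * s2);
    end
    autocorr_feature = mean(R);    % averaged across the 30 lags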
Hjorth parameters. These are descriptors of statistical properties of signals
introduced by Hjorth (1970). These parameters are widely used in EEG signal
analysis for feature extraction across a wide set of applications including
visual recognition (Joshi et al., 2018; Torabi et al., 2017). These features
consist of activity, mobility, and complexity, as defined below. As the
activity parameter is equivalent to the signal variance, which we already
extracted as a moment feature (see equation 2.3), we used only the mobility
and complexity parameters here. Hjorth complexity (Hjorth Cmp) determines
the variation in a time series' frequency by quantifying the similarity
between the signal and a pure sine wave, leading to a value of one in case
of a perfect match. In other words, values around one suggest lower
complexity for a signal. It is calculated as

\[ \mathrm{Complexity} = \frac{\mathrm{Mobility}\left(\frac{dx_t}{dt}\right)}{\mathrm{Mobility}(x_t)}. \qquad (2.15) \]

Hjorth mobility (Hjorth Mob) determines the proportion of standard deviation
of the power spectrum and is calculated as below, where var refers to the
signal variance:

\[ \mathrm{Mobility} = \sqrt{\frac{\mathrm{var}\left(\frac{dx_t}{dt}\right)}{\mathrm{var}(x_t)}}. \qquad (2.16) \]

2.2.3 ERP Components (N1, P1, P2a, and P2b). An ERP is a measured brain
response to a specific cognitive, sensory, or motor event that provides an
approach to studying the correlation between the event and neural processing.
According to their latency and amplitude, ERPs are split into specific
subwindows called components. Here, we extracted ERP components by
calculating the mean of signals in specific time windows to obtain the P1
(80–120 ms), N1 (120–200 ms), P2a (150–220 ms), and P2b (200–275 ms)
components, which were shown previously to provide significant amounts of
information about visual object and face processing in univariate (Rossion
et al., 2000; Rousselet, Husk, Bennett, & Sekuler, 2007) and multivariate
analyses (Chan et al., 2011; Jadidi, Zargar, & Moradi, 2016; Wang et al.,
2012). As these components are calculated in limited and specific time
windows, in the whole-trial analysis they reflect the mean of activity in
their specific time windows rather than the whole poststimulus window. They
will also be absent from time-resolved analyses by definition.
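Because each component is simply the mean amplitude within a fixed
poststimulus window, extraction is a one-liner per component. The sketch
assumes a trial x epoched from -200 to 1000 ms at 1000 Hz, so poststimulus
time t ms corresponds to sample 200 + t:

    % ERP components as window means (x epoched -200 to 1000 ms, fs = 1000 Hz)
    t0 = 200;                          % sample index of the stimulus onset
    P1  = mean(x(t0+80  : t0+120));    % P1:  80-120 ms
    N1  = mean(x(t0+120 : t0+200));    % N1: 120-200 ms
    P2a = mean(x(t0+150 : t0+220));    % P2a: 150-220 ms
    P2b = mean(x(t0+200 : t0+275));    % P2b: 200-275 ms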
2.2.4 Frequency-Domain Features. Neural variability is commonly analyzed
in the frequency domain by calculating spectral power across frequency
bands. Specifically, as data transformation from the time to the frequency
domain is almost lossless using the Fourier transform, oscillatory power
basically reflects frequency-specific variance (with the total power
reflecting the overall variance of the time series; Waschke et al., 2021).
Motivated by previous studies showing signatures of object categories in the
frequency domain (Behroozi et al., 2016; Rupp et al., 2017; Iranmanesh &
Rodriguez-Villegas, 2017; Joshi et al., 2018; Jadidi et al., 2016) and the
representation of temporal codes of visual information in the frequency
domain (Eckhorn et al., 1988), we also extracted frequency-domain features
to see if they could provide additional category-related information to
time-domain features. It is of note that while the whole-trial analysis
allows us to compare our results with previous studies, the evoked EEG
potentials are generally nonstationary (i.e., their statistical properties
change along the trial) and potentially dominated by low-frequency
components. Therefore, the use of time-resolved analysis, which looks at
more stationary subwindows of the signal (e.g., 50 samples here), will allow
us to detect subtle high-frequency patterns of neural codes.

Signal power (Signal Pw). Power spectral density (PSD) represents the
intensity or the distribution of the signal power into its constituent
frequency components. This feature was motivated by previous studies showing
associations between aspects of visual perception and power in certain
frequency bands (Rupp et al., 2017; Behroozi et al., 2016; Majima et al.,
2014). According to Fourier analysis, signals can be broken into their
constituent frequency components or a spectrum of frequencies in a specific
frequency range. Here, we calculated signal power using the PSD as in

\[ \tilde{S}_{xx}(\omega) = \frac{(\Delta t)^2}{T} \left| \sum_{n=1}^{N} x_n e^{-i \omega n \Delta t} \right|^2, \qquad (2.17) \]

where x_n = x_{n\Delta t} is the signal sampled at a rate of 1/\Delta t, T is
the duration of the window, and \omega is the frequency at which the signal
power is calculated. As signal power is a relatively broad term, including
the whole power spectrum of the signal, we also extracted a few more
parameters from the signal frequency representation to see what specific
features in the frequency domain (if any) can provide information about
object categories.

Mean frequency (Mean Freq). Motivated by the successful application of mean
and median frequencies in the analysis of EEG signals and their relationship
to signal components in the time domain (Intriligator & Polich, 1995;
Abootalebi, Moradi, & Khalilzadeh, 2009), we extracted these two features
from the signal power spectrum to obtain a more detailed insight into the
neural dynamics of category representations. Mean frequency is the average
of the frequency components available in a signal. Assume a signal
consisting of two frequency components, f_1 and f_2. The mean frequency of
this signal is f_mean = (f_1 + f_2)/2. Generally, the mean normalized (by
the intensity) frequency is calculated using the following formula,

\[ f_{\mathrm{mean}} = \frac{\sum_{i=0}^{n} l_i f_i}{\sum_{i=0}^{n} l_i}, \qquad (2.18) \]

where n is the number of splits of the PSD and f_i and l_i are the frequency
and the intensity of the PSD in its ith slot, respectively. It was calculated
using the Matlab meanfreq function.

Median frequency (Med Freq). This is the median normalized frequency of the
power spectrum of a time-domain signal. It is calculated similarly to the
signal median in the time domain; however, here the values are the power
intensities in different frequency bins of the PSD. This feature was
calculated using the Matlab medfreq function.

Power and phase at median frequency (Pw MdFrq and Phs MdFrq). Interestingly,
apart from the median frequency itself, which reflects the frequency aspect
of the power spectrum, the power and phase of the signal at the median
frequency have also been shown to be informative about aspects of human
perception (Joshi et al., 2018; Jadidi et al., 2016). Therefore, we also
calculated the power and phase of the frequency-domain signals at the median
frequency as features.
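Several of these spectral features map onto standard Signal Processing
Toolbox calls; the sketch below computes them for one window x sampled at
1000 Hz. The spectral edge and zero-crossing computations are hand-rolled
here, and the sign-swap count is only an approximation of the average
frequency described in the next subsection.

    % Frequency-domain features for one window x (fs = 1000 Hz), sketch
    fs = 1000;
    [pxx, f] = periodogram(x, [], [], fs);  % PSD estimate (equation 2.17)
    sig_pw = sum(pxx);                      % total signal power
    f_mean = meanfreq(x, fs);               % mean frequency (equation 2.18)
    f_med  = medfreq(x, fs);                % median frequency
    cum_pw = cumsum(pxx) / sum(pxx);
    sef95  = f(find(cum_pw >= 0.95, 1));    % spectral edge frequency (95%)
    n_swaps = sum(diff(sign(x)) ~= 0);      % sign-swap (zero-crossing) count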
Average frequency (Avg Freq). Evoked potentials show a few positive and
negative peaks after the stimulus onset, and they might show deviation in
the positive or negative directions depending on the information content
(Mazaheri & Jensen, 2008). Therefore, we also evaluated the average
(zero-crossing) frequency of the ERPs by counting the number of times the
signal swapped signs during the trial. Note that each trial is baselined
according to the average amplitude of the same trial in the last 200 ms
immediately before the stimulus onset. We calculated the average frequency
on the poststimulus time window.

Spectral edge frequency (SEF 95%). This is a common feature used in
monitoring the depth of anesthesia and stages of sleep using EEG (Iranmanesh
& Rodriguez-Villegas, 2017). It measures the frequency that covers X percent
of the PSD. X is usually set in the range of 75% to 95%. Here we set X to
95%. Therefore, this reflects the frequency below which 95% of the signal's
power spectrum is contained.

2.2.5 Multivalued Features. The main hypothesis of this study is that we can
potentially obtain more information about object categories as well as
behavior if we take into account the temporal variability of neural activity
within the analysis window (i.e., trial) rather than averaging the samples
as in conventional decoding analyses. While the above variability-sensitive
features return a single value from each individual time series (analysis
window), a more flexible feature would allow as many informative patterns as
possible to be detected from an individual time series. Therefore, we
extracted other features, which provide more than one value per analysis
window, so that we could select the most informative values from across
electrodes and time points simultaneously (see "Dimensionality reduction"
below). We also included the original magnitude data as our reference
feature, so that we know how much (if at all) our feature extraction and
selection procedures improved decoding.

Interelectrode correlation (Cross Corr). Following up on recent studies that
have successfully used interarea correlation in decoding object category
information from EEG activations (Majima et al., 2014; Karimi-Rouzbahani
et al., 2017a; Tafreshi, Daliri, & Ghodousi, 2019), we extracted
interelectrode correlation to measure the similarity between pairs of
signals, here from different pairs of electrodes. This feature of correlated
variability quantifies the covariability of neural activations across pairs
of electrodes. Although closer electrodes tend to provide more similar (and
therefore correlated) activation compared to further electrodes (Hacker,
Snyder, Pahwa, Corbetta, & Leuthardt, 2017), the interelectrode correlation
can detect correlations that are functionally relevant and are not explained
by the distance (Karimi-Rouzbahani et al., 2017a). This feature detects
similarities in temporal patterns of fluctuations across time between pairs
of signals, which is calculated as

\[ R_{xy} = \frac{1}{N \sigma_x \sigma_y} \sum_{t=1}^{N} (x_t - \bar{x})(y_t - \bar{y}), \qquad (2.19) \]

where x and y refer to the signals obtained from electrodes x and y,
respectively. We calculated the cross-correlation between each electrode and
all the other electrodes to form a cross-correlation matrix.
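Equation 2.19 is the Pearson correlation, so the full interelectrode matrix
and its unique above-diagonal entries can be obtained in three lines, as
sketched below; X is an assumed [samples x electrodes] matrix for one
analysis window:

    % Interelectrode correlation feature (equation 2.19), sketch
    R = corrcoef(X);                % electrodes x electrodes correlations
    iu = triu(true(size(R)), 1);    % unique pairs above the diagonal
    cross_corr = R(iu)';            % e.g., 465 values for 31 electrodes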
Accordingly, we initially obtained all the unique possible pairwise
interelectrode correlations (465, 465, and 8128 unique values for data sets
1, 2, and 3, respectively), which were then reduced in dimension using PCA
to the same number of dimensions obtained for single-valued features.

Wavelet transform (Wavelet). Recent studies have shown remarkable success
in decoding object categories using the wavelet transformation of EEG time
series (Taghizadeh-Sarabi et al., 2015; Torabi et al., 2017). Considering
the time- and frequency-dependent nature of ERPs, the wavelet transform
seems to be a reasonable choice as it provides a time-frequency
representation of signal components. It determines the primary frequency
components and their temporal positions in time series. The transformation
passes the signal time series through digital filters (Guo, Rivero, Seoane,
& Pazos, 2009; see equation 2.20) using the convolution operator, each of
which is adjusted to extract a specific frequency (scale) at a specific
time, as in

\[ y_n = (x * g)_n = \sum_{k=-\infty}^{+\infty} x_k \, g_{n-k}, \qquad (2.20) \]

where g is the digital filter and * is the convolution operator. This
filtering procedure is repeated for several rounds (levels), filtering low-
(approximations) and high-frequency (details) components of the signal to
provide more fine-grained information about the constituent components of
the signal. This can lead to coefficients that can potentially discriminate
signals evoked by different conditions. Following up on a previous study
(Taghizadeh-Sarabi et al., 2015) and to make the number of wavelet features
comparable in number to signal samples, we used detail coefficients at five
levels, D1, . . . , D5, as well as the approximate coefficients at level 5,
A5. This led to 1015 and 57 features in the full trial and in the 50 ms
sliding time windows, respectively. We used the Symlet2 basis function for
our wavelet transformations as implemented in Matlab.

Hilbert transform (Hilb Amp and Hilb Phs). The Hilbert transform provides
amplitude and phase information about the signal and has recently shown
successful results in decoding visual letter information from ERPs (Wang
et al., 2018). The phase component of the Hilbert transform can
qualitatively provide the spatial information obtained from the wavelet
transform, leading to their similarity in evaluating neuronal synchrony (Le
Van Quyen et al., 2001). However, it is still unclear which method can
detect category-relevant information from the nonstationary ERP components
more effectively. The Hilbert transform is described as a mapping function
that receives a real signal x_t (as defined above) and, upon convolution
with the function 1/(\pi t), produces another function of a real variable,
H(x)(t), as

\[ H(x)(t) = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x_\tau}{t - \tau} \, d\tau, \qquad (2.21) \]

where H(x)(t) is a frequency-domain representation of the signal x_t, which
has simply shifted all the components of the input signal by \pi/2.
Accordingly, it produces one amplitude and one phase component per sample in
the time series. In the current study, the Hilbert transform was applied on
1000 and 50 samples in the whole-trial and time-resolved analyses,
respectively. We used the amplitude and phase components separately to
discriminate object categories in the analyses.
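Both multivalued transforms correspond to standard Wavelet and Signal
Processing Toolbox calls; a sketch for one window x, using the 'sym2'
(Symlet2) basis and five decomposition levels as described above:

    % Wavelet coefficients (sym2, five levels) and Hilbert features, sketch
    [c, l] = wavedec(x, 5, 'sym2');  % D1-D5 detail and A5 approximation coeffs
    wavelet_feats = c;               % e.g., 57 values for a 50-sample window

    h = hilbert(x);                  % analytic signal (equation 2.21)
    hilb_amp = abs(h);               % instantaneous amplitude (envelope)
    hilb_phs = angle(h);             % instantaneous phase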
Amplitude and phase locking (Amp Lock and Phs Lock). Although interelectrode correlated variability (Cross Corr), which is interpreted as interarea connectivity, has successfully provided object category information (Majima et al., 2014; Karimi-Rouzbahani et al., 2017a), previous studies suggested that neural communication is realized through amplitude and phase locking and coupling (Bruns, Eckhorn, Jokeit, & Ebner, 2000; Siegel, Donner, & Engel, 2012; Engel, Gerloff, Hilgetag, & Nolte, 2013). More recently, researchers have quantitatively shown that amplitude and phase locking detect distinct signatures of neural communication across time and space from neural activity (Siems & Siegel, 2020; Mostame & Sadaghiani, 2020). Therefore, in line with recent studies, which successfully decoded object categories using interarea-correlated variability as neural codes (Tafreshi et al., 2019), we extracted amplitude and phase locking as two major connectivity features that might contain object category information as well. Briefly, amplitude locking refers to the coupling between the envelopes of two signals (electrodes) and reflects the correlation of activation amplitude. To estimate the amplitude locking between two signals, we extracted the envelopes of the two signals using the Hilbert transform (Gabor, 1946; explained above), then estimated the Pearson correlation between the two resulting envelopes as amplitude locking.

Phase locking refers to the coupling between the phases of two signals and measures the synchronization of rhythmic oscillation cycles. To measure phase locking, we used one of the simplest implementations, the phase-locking value (PLV), which includes minimal mathematical assumptions (Bastos & Schoffelen, 2016), calculated as

\mathrm{PLV} = \frac{1}{N} \left| \sum_{i=1}^{N} e^{i \Delta\phi} \right|,   (2.22)

where N is the number of trials and \Delta\phi is the phase difference between the signals of the two electrodes in each pair. As we used multivariate decoding without any trial averaging, N was equal to one here. The calculation of amplitude and phase locking was performed on all electrode pairs, leading to 465 and 8128 unique numbers for the 31- (data sets 1 and 2) and 128-electrode (data set 3) data sets before dimension reduction was performed.
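The following sketch (again, not the authors' code) computes amplitude locking and the PLV of equation 2.22 for one electrode pair. Because decoding here is single trial (N = 1 in equation 2.22), the sketch pools the phase difference across the time samples of the analysis window, a common single-trial variant; that reading is an assumption.

import numpy as np
from scipy.signal import hilbert
from scipy.stats import pearsonr

def amplitude_locking(x, y):
    """Pearson correlation between the Hilbert envelopes of two signals."""
    r, _ = pearsonr(np.abs(hilbert(x)), np.abs(hilbert(y)))
    return r

def phase_locking_value(x, y):
    """|mean(exp(i * phase difference))|: 1 = perfectly consistent phase lag
    across the window, 0 = no consistent phase relation."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))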
Original magnitude data (Orig Mag). We also used the poststimulus original magnitude data (1000 or 50 samples for the whole-trial and sliding time windows, respectively) to decode object category information without any feature extraction. This provided a reference against which to compare the information content of the mean and variability features, to see if those features provided any extra information.

2.3 Multivariate Decoding. We used multivariate decoding to extract information about object categories from our EEG data sets. Basically, multivariate decoding, which has been dominating neuroimaging studies recently (Haynes & Rees, 2006; Grootswagers et al., 2017; Hebart & Baker, 2018), measures the cross-condition dissimilarity or contrast to quantify information content in neural representations. We used linear discriminant analysis (LDA) classifiers in multivariate analysis to measure the information content across all possible pairs of object categories within each data set. Specifically, we trained and tested the classifiers on animal versus car, animal versus face, animal versus plane, car versus plane, face versus car, and plane versus face categories, then averaged the six decoding results and reported them for each participant. The LDA classifier has been shown to be robust when decoding object categories from M/EEG (Grootswagers et al., 2017, 2019), has provided higher decoding accuracies than Euclidean distance and correlation-based decoding methods (Carlson et al., 2013), and was around 30 times faster to train in our initial analyses compared to the more complex classifier of support vector machines (SVM). We ran our initial analyses, found similar results for the LDA and SVM, and used the LDA to save time. We used a 10-fold cross-validation procedure in which we trained the classifier on 90% of the data and tested it on the left-out 10% of the data, repeating the procedure 10 times until all trials from the pair of categories participated once in the training and once in the testing of the classifiers. We repeated the decoding across all possible pairs of categories within each data set, which were 6, 6, and 15 pairs for data sets 1, 2, and 3, which consisted of 4, 4, and 6 object categories, respectively. Finally, we averaged the results across all combinations and reported them as the average decoding for each participant.

In the whole-trial analyses, we extracted the above-mentioned features from the 1000 data samples after the stimulus onset (from 1 to 1000 ms). In the time-resolved analyses, we extracted the features from 50 ms sliding time windows in steps of 5 ms across the time course of the trial (−200 to 1000 ms relative to the stimulus onset time). Therefore, in time-resolved analyses, the decoding rates at each time point reflect the results for the 50 ms window around the time point, from −25 to +24 ms relative to the time point. Time-resolved analyses allowed us to evaluate the evolution of object category information across time as captured by different features.

2.4 Dimensionality Reduction. The multivalued features (e.g., interelectrode correlation, wavelet, Hilbert amplitude and phase, amplitude and phase locking, and original magnitude data) resulted in more than a single feature value per trial per sliding time window. This could provide higher decoding values compared to the decoding values obtained from single-valued features merely because of the inclusion of a higher number of features. Moreover, when the features outnumber the observations (trials here), the classification algorithm can overfit to the data (Hart, Stork, & Duda, 2000). Therefore, to obtain comparable decoding accuracies across single-valued and multivalued features and to avoid potential overfitting of the classifier to the data, we used principal component analysis (PCA) to reduce the dimension of the data in multivalued features. Accordingly, we reduced the number of the values in the multivalued features to one per time window per trial, which equaled the number of values for the single-valued features. To avoid potential leakage of information from testing to training (Pulini, Kerr, Loo, & Lenartowicz, 2019), we applied the PCA algorithm on the training data (folds) only and used the training PCA parameters (eigenvectors and means) for both training and testing sets for dimension reduction in each cross-validation run separately.
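A compact way to reproduce this design is sketched below with scikit-learn (an assumption; the study used Matlab): pairwise LDA with 10-fold cross-validation, where PCA is placed inside a Pipeline so that, as described above, it is fit on the training folds only and merely applied to the test fold.

import numpy as np
from itertools import combinations
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

def pairwise_lda_decoding(X, y, n_dims=31):
    """Average 10-fold LDA accuracy over all category pairs (6, 6, and 15
    pairs for the 4-, 4-, and 6-category data sets). `X` is (n_trials,
    n_values) for one feature; `n_dims` caps the retained dimensions at the
    number of electrodes."""
    pair_accuracies = []
    for c1, c2 in combinations(np.unique(y), 2):
        mask = np.isin(y, [c1, c2])
        clf = make_pipeline(PCA(n_components=n_dims),
                            LinearDiscriminantAnalysis())
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        pair_accuracies.append(
            cross_val_score(clf, X[mask], y[mask], cv=cv).mean())
    return float(np.mean(pair_accuracies))

# For time-resolved decoding, extract the feature within each 50 ms window
# (5 ms steps) and call pairwise_lda_decoding once per window.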
We applied the dimension-reduction procedure only on the multivalued features. Note that we did not reduce the dimension of the neural space (columns in the dimension-reduced data matrix) to below the number of electrodes "e" (opposite of Hatamimajoumerd, Talebpour, & Mohsenzadeh, 2020), as we were interested in qualitatively comparing our results with the vast literature currently using multivariate decoding with all sensors (Grootswagers et al., 2017; Hebart & Baker, 2018). Also, we did not aim at finding more than one feature per trial per time window, as we wanted to compare the results of multivalued features with those of single-valued features, which had only a single value per trial per time window.

One critical point here is that we applied the PCA on the concatenated data from all electrodes and values obtained from each individual feature (e.g., wavelet coefficients in wavelet) within each analysis window (e.g., 50 ms in time-resolved decoding). Therefore, for the multivalued features, the "e" selected dimensions were the most informative spatial and temporal patterns detected across both electrodes and time samples. Therefore, it could be the case that within a given time window, two of the selected dimensions were from the same electrode (because two elements from the same electrode were more informative than those of another electrode), which would lead to some electrodes not having any representatives among the selected dimensions. This is in contrast to the single-valued features (e.g., mean), from which we obtained only one value per analysis window per electrode, limiting the features to only the spatial patterns within the analysis window rather than both spatial and temporal patterns.

2.5 Statistical Analyses.

2.5.1 Bayes Factor Analysis. As in our previous studies (Grootswagers et al., 2019; Robinson, Grootswagers, & Carlson, 2019), to determine the evidence for the null and the alternative hypotheses, we used Bayes analyses as implemented by Bart Krekelberg based on Rouder, Morey, Speckman, and Province (2012). We used standard rules of thumb for interpreting levels of evidence (Lee & Wagenmakers, 2005; Dienes, 2014): Bayes factors above 10 or below 1/10 were interpreted as strong evidence for the alternative and null hypotheses, respectively; Bayes factors above 3 or below 1/3 as moderate evidence for the alternative and null hypotheses, respectively; and Bayes factors between 1/3 and 3 as insufficient evidence for either hypothesis.
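As an illustration of these rules of thumb, the sketch below computes a JZS Bayes factor for above-chance decoding with the pingouin package (an assumed stand-in for the Matlab implementation used in the study); the accuracy values are placeholders.

import numpy as np
from scipy import stats
import pingouin as pg

acc = np.array([0.54, 0.58, 0.51, 0.60, 0.56, 0.53, 0.57, 0.55, 0.52, 0.59])
t, _ = stats.ttest_1samp(acc, popmean=0.5)        # one accuracy per participant
bf10 = float(pg.bayesfactor_ttest(t, nx=len(acc)))

if bf10 > 10 or bf10 < 1 / 10:
    level = 'strong'
elif bf10 > 3 or bf10 < 1 / 3:
    level = 'moderate'
else:
    level = 'insufficient'
print(f'BF10 = {bf10:.2f} ({level} evidence)')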
In the whole-trial analyses, there was moderate to strong (BF > 3) evidence for above-chance decoding for the majority of features (e.g., moment features, complexity, and frequency-domain features; see Supplementary Figure 1, black bars and their Bayesian analyses). However, consistently across the three data sets, there was moderate (3 < BF < 10) or strong (BF > 10) evidence for above-chance decoding for all ERP components (N1, P1, P2a, and P2b), wavelet coefficients (Wavelet), and original magnitude data (Orig Mag), which were either targeted at specific time windows within the trial (ERPs) or could detect temporal variabilities within the trial (Wavelet and Orig Mag; see Figure 2A, black bars).

Importantly, in all three data sets, there was moderate (3 < BF < 10) or strong (BF > 10) evidence that the ERP components of N1 and P2a provided higher decoding values than the mean (see Figure 2B, black boxes in Bayes matrices). There was also strong evidence (BF > 10) that the wavelet and Orig Mag features outperformed the mean feature in data sets 2 and 3 (see Figure 2B, blue boxes in Bayes matrices). This shows that simply using the earlier ERP components of N1 and P2a can provide more information than using the mean activity across the whole trial. This was predictable, as the mean across the whole trial simply ignores within-trial temporally specific information. Interestingly, even ERPs were outperformed by the wavelet and Orig Mag features in data set 3 (but not the opposite across the three data sets; see Figure 2B, violet boxes in Bayes matrices). This suggests that even further targeting the most informative elements (wavelet) and/or data samples (Orig Mag) within the trial can lead to improved decoding. Note that the wavelet and Orig Mag features provided the most informative temporal patterns and samples through the dimension reduction procedure applied to their extracted features (see section 2).

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3052

H. Karimi-Rouzbahani et al.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Figure 2: Whole-trial decoding of object categories in the three data sets across the broad band and different frequency bands (A) with their Bayesian analyses (B). The results are presented only for features of mean, ERP components, wavelet, and Orig Mag. For full results including other features, see Supplementary Figures 1 and 2. (A) The black horizontal dashed lines on the top panels refer to chance-level decoding. Thick bars show the average decoding across participants (error bars: standard error across participants). Bayes factors are shown in the bottom panel of each graph. Filled circles show moderate to strong evidence for either hypothesis, and empty circles indicate insufficient evidence. They show the results of Bayes factor analysis when evaluating the difference from chance-level decoding. (B) Top panel: Bayes matrices compare the decoding results within each frequency band across features, separated by data sets. Bottom panel: Bayes matrices compare decoding results across different frequency bands and data sets separately. Colors indicate different levels of evidence for existing difference (moderate 3 < BF < 10, orange; strong BF > 10, yellow), no difference (moderate 0.1 < BF < 0.3, light blue; strong BF < 0.1, dark blue), or insufficient evidence (1 < BF < 3, green; 0.3 < BF < 1, cyan) for either hypothesis. For example, for data set 1, there is strong evidence for higher decoding values for the N1 feature in the theta and alpha bands than in the gamma band, as indicated by the red box.


Following previous observations about the advantage of the delta (Watrous et al., 2015; Behroozi et al., 2016) and theta (Wang et al., 2018) frequency bands, we compared the information content in the delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–16 Hz), gamma (16–200 Hz), and broad frequency bands. We predicted the domination of the theta frequency band, following suggestions about the domination of the theta frequency band in feedforward visual processing (Bastos et al., 2015). For our top-performing ERP, wavelet, and Orig Mag features, we saw consistent domination of theta followed by the alpha frequency band (see Figure 2A). Interestingly, for the ERP components, the decoding in the theta band even outperformed the broad band (BF > 3 for P2b), which contained the whole frequency spectrum. Note that as opposed to previous suggestions (Karakaş, Erzengin, & Başar, 2000), the domination of the theta frequency band in ERP components could not be trivially predicted by their timing relative to the stimulus onset. If this were the case here, the P2b component (200–275 ms) should have elicited its maximum information in the delta (0.5–4 Hz) and theta (4–8 Hz) bands rather than the theta and alpha (8–12 Hz) frequency bands. For the mean feature, the delta band provided the highest information level, comparable to the level of the broad band activity. This confirms that the broad band whole-trial mean activity reflects the general trend of the signal (its low-frequency component).
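For readers who want to reproduce the band comparison, a minimal band-limiting sketch follows (an assumption, not the authors' preprocessing): a zero-phase Butterworth band-pass filter with the band edges listed above; the filter order and sampling rate are illustrative.

import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 12),
         'beta': (12, 16), 'gamma': (16, 200)}

def band_limit(trial, band, fs=1000, order=4):
    """Band-pass one trial of shape (n_electrodes, n_samples), to be used in
    place of the broad band signal."""
    low, high = BANDS[band]
    sos = butter(order, [low, high], btype='bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, trial, axis=1)

theta_trial = band_limit(np.random.randn(31, 1200), 'theta')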

Together, we observed that the features that are targeted at informative windows of the trial (ERP components) and those sensitive to informative temporal variabilities (wavelet and Orig Mag) could provide additional category information to the conventionally used mean of activity. We observed that the theta frequency band, which has been suggested to support feedforward information flow, is also dominant in our data sets, which are potentially dominated by feedforward processing of visual information during object perception. Next, we compare the temporal dynamics of information encoding across our features.

3.2 Do the Features Sensitive to Temporal Variabilities Evolve over Similar Time Windows to the Mean Feature? One main insight that EEG decoding can provide is to reveal the temporal dynamics of cognitive processes. However, the mean activity, which has dominated the literature
(Grootswagers et al., 2017), might hide or distort the true temporal dynamics, as it ignores potentially informative temporal variabilities (codes) within the analysis window. Therefore, we systematically compared the information content of a large set of features that are sensitive to temporal variabilities using time-resolved decoding (50 ms sliding time windows in steps of 5 ms; see the rationale for choosing the 50 ms windows in Supplementary Figure 3A). By definition, we do not have time-resolved decoding results for the ERP components here.

Before presenting the time-resolved decoding results, to validate the results and suggestions made about our whole-trial decoding (see Figure 2), we performed two complementary analyses. First, we checked to see if the advantage of theta-band over broad-band decoding in the whole-trial analysis (see Figure 2) could generalize to time-resolved decoding: we observed the same effect in the (variability-sensitive) wavelet feature (at many time points, especially for data set 2; BF > 3), but not in the (variability-
insensitive) mean feature (see Supplementary Figure 3B). This could possibly be explained by the smoothing (low-pass filtering) effect of the mean feature, which makes both theta and broad band data look like low-frequency data. Next, we used the spatiotemporal specificity of classifier weights and time-resolved decoding to see if theta band information would show a feedforward trend on the scalp, to support our earlier suggestion. Visual inspection suggests information spread from the posterior to the anterior parts of the head (e.g., as in feedforward models of visual processing; Karimi-Rouzbahani et al., 2017c; see Supplementary Figure 4), supporting the role of theta-band activity in feedforward processing. Despite these observations, we used broad band signals in the following analyses to be able to
je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

compare our results with previous studies, which generally used the broad band activity.

Figure 3: Time-resolved decoding of object categories from the three data sets for three of the target features (A) and their extracted timing and amplitude parameters (B–E). (A) Top section in each panel shows the decoding accuracies across time, and the bottom section shows the Bayes factor evidence for the difference of the decoding accuracy compared to chance-level decoding. The solid lines show the average decoding across participants and the shaded areas the standard error across participants. The horizontal dashed lines on the top panels refer to chance-level decoding. Filled circles in the Bayes factors show moderate to strong evidence for either difference or no difference from chance level or across features, and empty circles indicate insufficient evidence for either hypothesis. (B) Timing and amplitude parameters extracted from the time-resolved accuracies in panel A. (B–E) Left: The maximum and average decoding accuracies, the time of maximum, and the first above-chance decoding. The horizontal dashed lines refer to chance-level decoding. Thick bars show the average across participants (error bars: standard error across participants). Bottom sections on panels B and C show the Bayes factor evidence for the difference of the decoding accuracy compared to chance-level decoding. Right: Matrices compare the parameters obtained from different features. Different levels of evidence for existing difference (moderate 3 < BF < 10, orange; strong BF > 10, yellow), no difference (moderate 0.1 < BF < 0.3, light blue; strong BF < 0.1, dark blue), or insufficient evidence (1 < BF < 3, green; 0.3 < BF < 1, cyan) for either hypothesis. Filled circles in the Bayes factors show moderate to strong evidence for either hypothesis, and open circles indicate insufficient evidence. Single and double stars indicate moderate and strong evidence for difference between the parameters obtained from the decoding curves of the three features.

Time-resolved decoding analyses showed that for all features, including the complexity features, which were suggested to need large sample sizes (Procaccia, 1988), there was moderate (3 < BF < 10) or strong (BF >
10) evidence for above-chance decoding at some time points consistently across the three data sets (see Supplementary Figure 5A). However, all features showed temporal dynamics distinct from each other and across data sets. The dissimilarities between data sets could be driven by many data set-specific factors, including the duration of image presentation (Carlson et al., 2013). However, there were also similarities between the temporal dynamics of different features. For example, the time points of first strong (BF > 10) evidence for above-chance decoding ranged from 75 ms to 195 ms (see Supplementary Figures 5A and 5E), and the decoding values reached their maxima in the range between 150 ms and 220 ms (see Supplementary Figures 5A and 5D) across features. This is consistent with many decoding studies showing the temporal dynamics of visual processing in the brain (Isik et al., 2014; Cichy et al., 2014; Karimi-Rouzbahani, Woolgar, & Rich, 2021). There was no feature that consistently preceded or followed other features, to suggest the existence of very early or late neural codes (see Supplementary Figures 5D and 5E). There was more information decoded from the features of mean, median, variance, and several multivalued features, especially wavelet and Orig Mag, compared to other features across the three data sets (see Supplementary Figure 5A). The mentioned features dominated other features in terms of both average and maximum decoding accuracies (see Supplementary Figures 5B and 5C). A complementary analysis suggested a potential overlap between the neural codes that different features detected (see Supplementary Figure 6).

We then directly compared the mean and the most informative variability-sensitive features (wavelet and Orig Mag). Consistently across the data sets, there was moderate (3 < BF < 10) or strong (BF > 10) evidence for higher decoding obtained by wavelet and Orig Mag compared to the mean feature at time points before 200 ms poststimulus onset (see Figure 3A). After 200 ms, this advantage was sustained (data set 3), disappeared (data set 1), or turned into a disadvantage (data set 2). Except for a few very short continuous intervals, during which wavelet provided higher decoding values compared to Orig Mag, the two features provided almost the same results (see Figure 3, yellow dots on bottom panels). Comparing the parameters of the decoding curves, we found moderate (3 < BF < 10) or strong (BF > 10) evidence for higher maximum decoding for the wavelet and Orig Mag features than the mean feature in data sets 1 and 3 (see Figure 3B). There was also moderate (3 < BF < 10) evidence for higher maximum decoding accuracy for wavelet versus Orig Mag (see Figure 3B). There was also strong (BF > 10) evidence for higher average decoding accuracy for the wavelet and Orig Mag features over the mean feature in data set 3
je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Additional Information in Temporal Variability of Evoked Potentials

3057

(see Figure 3C). There was also moderate (3 < BF < 10) evidence for higher maximum decoding for wavelet versus Orig Mag in data sets 2 and 3. These results show that the wavelet feature provides the highest maximum (in data set 3) and average (in data sets 2 and 3) decoding accuracies among the three features, followed by the Orig Mag feature. The measures of maximum and average decoding accuracies were calculated in the poststimulus span (0–1000 ms) for each participant separately. We also compared the timing parameters of the decoding curves (i.e., the time to the first above-chance and maximum decoding relative to stimulus onset) obtained for the three features (see Figures 3D and 3E) but found insufficient evidence (0.3 < BF < 3) for their difference.

Together, these results suggest that the inclusion of temporal variabilities of activity can provide additional information about object categories beyond what is conventionally obtained from the mean of activity. Note that the advantage of the wavelet and Orig Mag features cannot be explained by the size or dimensionality of the feature space, as the numbers of dimensions were equalized across features. Importantly, however, the decoding of information from temporal variabilities did not lead to different temporal dynamics of information decoding. This can be explained by either the common cognitive processes producing the decoded neural codes (i.e., object categorization), the overlap between the information (neural codes) detected by our features, or a combination of both.

3.3 Do the Features Sensitive to Temporal Variabilities Explain the Behavioral Recognition Performance More Accurately than the Mean Feature? Although we observed an advantage for the features that were sensitive to temporal variability (e.g., wavelet) over other, more summarized features (e.g., mean), this could all be a by-product of more flexibility (e.g., inclusion of both temporal and spatial codes) in the former over the latter, without the detected codes being read out by the downstream neurons that support behavior. To validate the behavioral relevance of the detected neural codes, we calculated the correlation between the decoding accuracies of features and the reaction times of participants (Vidaurre, Myers, Stokes, Nobre, & Woolrich, 2019; Ritchie et al., 2015). Participants' reaction times in object recognition have been previously shown to be predictable from decoding accuracy (Ritchie et al., 2015). We expected to observe negative correlations between the features' decoding accuracies and participants' reaction times in the poststimulus span (Ritchie et al., 2015), which would suggest that greater separability between neural representations of categories might lead to categorizing them faster in behavior, supporting the idea that the decoded neural codes might be used by the neurons that drive behavior. We used only data set 2 in this analysis, as it was the only data set with an active object detection task, so relevant reaction times were available. The (Spearman's rank-order) correlations were calculated across the time course of the trials between the 10-dimensional vector of neural decoding accuracies obtained at every time point and the 10-dimensional vector of behavioral reaction times, both obtained from the group of 10 participants (Cichy et al., 2014). This resulted in a single correlation value for each time point for the whole group of participants.
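A minimal sketch of this correlation analysis (not the authors' code) follows; array names and shapes are assumptions.

import numpy as np
from scipy.stats import spearmanr

def decoding_behavior_correlation(decoding, rt):
    """Spearman's rho at each time point, across participants. `decoding` is
    (n_participants, n_timepoints) accuracies for one feature and `rt` the
    (n_participants,) reaction times; with 10 participants, each time point
    correlates two 10-dimensional vectors. Negative values mean higher
    decoding accuracy goes with faster responses."""
    rhos = []
    for t in range(decoding.shape[1]):
        rho, _ = spearmanr(decoding[:, t], rt)
        rhos.append(rho)
    return np.array(rhos)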
All features except Katz FD showed negative trends after the stimulus onset (see Figure 4A). The correlations showed more sustained negative values for the multivalued versus single-valued features (p < 0.05). There were also larger negative peaks (generally < −0.5) for multivalued features, especially wavelet, compared to other features (generally > −0.5). Specifically,
while higher-order moment features (variance, skewness, and kurtosis), as well as many complexity features, showed earlier negative peaks at around 150 ms, the mean, median, frequency-domain features, and multivalued features showed later negative peaks after 300 ms. Therefore, the multivalued features, especially wavelet, which were sensitive to temporal variabilities of the signals, showed the most sustained and significant correlations to behavior.

Visual inspection suggests that features that provided a higher decoding accuracy (e.g., wavelet, Figure 3) also did better at predicting behavioral performance (e.g., wavelet, Figure 4). To see quantitatively if such a relationship exists, we calculated the correlation between parameters of the decoding curves (introduced in Figures 3B to 3D) and the average correlation to behavior obtained by the same features (see Figure 4A). Specifically, we used the average decoding and maximum decoding accuracies, which we hypothesized to predict the average correlation to behavior. The rationale behind this hypothesis was that more effective decoding of neural codes, as reflected in higher average decoding and maximum decoding accuracies (see Figure 3), should facilitate better prediction of behavior by detecting subtle but overlooked behavior-relevant neural codes. As a control, we also evaluated the time of first above-chance decoding and the time of maximum decoding accuracy, which we hypothesized not to correlate with the average correlation to behavior. Our rationale behind this prediction was that we had already observed relatively similar temporal dynamics for more and less informative features of neural activity (see Figures 3D and 3E), suggesting that all those features detect some aspects of the codes produced by similar neural mechanisms.
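For concreteness, a hypothetical helper for extracting these four decoding-curve parameters from one participant's curve is sketched below; it simplifies "first above-chance" to exceeding the chance level, whereas the text defines above-chance decoding through Bayesian evidence.

import numpy as np

def curve_parameters(acc, times, chance=0.5):
    """`acc` is one participant's time-resolved accuracy (n_timepoints,) and
    `times` the matching latencies in ms relative to stimulus onset."""
    post = times >= 0                        # poststimulus span (0-1000 ms)
    acc_post, t_post = acc[post], times[post]
    above = np.where(acc_post > chance)[0]
    return {'max_decoding': acc_post.max(),
            'avg_decoding': acc_post.mean(),
            'time_of_max': t_post[np.argmax(acc_post)],
            'time_first_above_chance':
                t_post[above[0]] if above.size else np.nan}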

To obtain the parameter of average correlation to behavior, we simply
averaged the correlation to behavior in the poststimulus time span for each
feature separately (see Figure 4A). Results showed that (see Figure 4B)
while the temporal parameters of time of first above-chance decoding and
time of maximum decoding (our control parameters) failed to predict the
level of average correlation to behavior (r = 0.24, p = 0.21, and r = 0.17,
p = 0.38, respectively), the parameters of maximum decoding and aver-
age decoding accuracies significantly (r = −0.69 and r = −0.71 respectively,
with p < 0.0001; Pearson's correlation) predicted the average correlation to behavior. Note the difference between the Spearman's correlation to behavior calculated in Figure 4A and the correlations reported in Figure 4B. While the Spearman's correlation to behavior is obtained by correlating the time-resolved decoding rates and corresponding reaction times across participants, the average correlation to behavior is calculated by correlating the poststimulus average of the former correlations and their corresponding decoding parameters across features rather than participants. This result suggests that the more effective the decoding of the neural codes, the better the prediction of behavior. This is not a trivial result; higher decoding values for the more informative features do not necessarily lead to higher correlation to behavior, as "correlation" normalizes the absolute values of the input variables.

4 Discussion

Temporal variability of neural activity has been suggested to provide an additional channel to the mean of activity for the encoding of several aspects of the input sensory information. This includes the complexity (Garrett et al., 2020), uncertainty (Orbán et al., 2016), and variance (Hermundstad et al., 2014) of the input information. It is suggested that the brain optimizes the neuronal activation and variability to avoid overactivation (energy loss) for simple, familiar, and less informative categories of sensory inputs. For example, face images, which have less variable compositional features, evoked less variable responses in fMRI compared to house images, which were more varied, even in a passive viewing task (Garrett et al., 2020). This automatic and adaptive modulation of neural variability can result in more effective and accurate encoding of the sensory inputs in changing environments, for example, by suppressing uninformative neuronal activation for less varied (more familiar) stimuli such as face versus house images (Garrett et al., 2020). Despite the recent evidence about the richness of information in temporal variability, which is modulated by the category of the sensory input (Garrett et al., 2020; Orbán et al., 2016; Waschke et al., 2021), the majority of EEG studies still ignore variability in decoding. Specifically, they generally either extract variability (e.g., entropy and power) from the whole-trial activity (e.g., for brain-computer interfaces) or use the simple mean (average) magnitude data within subwindows of the trial (e.g., for time-resolved decoding; Grootswagers et al., 2017). The former can miss the informative within-trial variabilities and fluctuations of the highly dynamical and nonstationary evoked potentials. The latter may overlook the informative variabilities within the sliding time windows as a result of temporal averaging. Here, we quantified the advantage of the features sensitive to temporal variabilities over the conventional mean activity.
In whole-trial analysis, we observed that the features that targeted informative subwindows and samples of the trial (e.g., ERP components, wavelet coefficients (wavelet), and original magnitude data (Orig Mag)) could provide more category information than the mean feature, which ignored temporal variabilities. Interestingly, ERP components (N1, P2a, and P2b) provided results comparable to those obtained by informative samples (Orig Mag) or the wavelet transformation (except for data set 3). That could be the reason for the remarkable decoding results achieved in previous studies that used ERPs (Wang et al., 2012; Qin et al., 2016) and wavelet (Taghizadeh-Sarabi et al., 2015). These results also suggest that we might not need to apply complex transformations (e.g., wavelet) to the data in whole-trial analysis (Taghizadeh-Sarabi et al., 2015), as comparable results can be obtained using simple ERP components or original magnitude data. However, the inclusion of more dimensions of the features in decoding, or combining them (Karimi Rouzbahani & Daliri, 2011; Qin et al., 2016), could potentially provide higher decoding accuracies for multivalued (e.g., wavelet; Taghizadeh-Sarabi et al., 2015) than ERP features (i.e., we equalized the dimensions across features here).

The wavelet and original magnitude data not only outperformed all the other variability-sensitive features but also the conventional mean feature. Importantly, while features such as Hilbert phase and amplitude, phase and amplitude locking, and interelectrode correlations also had access to all the samples within the sliding analysis window, they failed to provide information comparable to the wavelet and Orig Mag features. The reason for the success of the original magnitude data seems to be that it makes essentially no assumptions about the shape or pattern of the potential neural codes, as opposed to Hilbert phase (Hilb Phs), amplitude (Hilb Amp), and correlated variability (Cross Corr), each of which is sensitive to one specific aspect of neural variability (phase, amplitude, or correlation). The reason for the success of the wavelet feature seems to be its reasonable balance between flexibility in detecting potential neural codes contained in the amplitude, phase, and frequency/scale and a relatively lower susceptibility to noise as a result of the filtering applied on different frequency bands (Guo et al., 2009).
Figure 4: Correlation between the decoding accuracies and behavioral reaction times for data set 2 (other data sets did not have an active object recognition or detection task). (A) Top section in each panel shows the (Spearman's) correlation coefficient obtained from correlating the decoding values and the reaction times for each feature separately. Correlation curves were obtained from the data of all participants. Bottom section shows positively or negatively significant (p < 0.05; filled circles) or nonsignificant (p > 0.05; open circles) correlations, as evaluated by random permutation of the variables in correlation. (B) Correlation between each of the amplitude and timing parameters of time-resolved decoding (i.e., maximum and average decoding accuracy and time of first and maximum decoding) and the average time-resolved correlations calculated from panel A for the set of N = 28 features. The slanted line shows the best linear fit to the distribution of the data.

Together, these observations support the idea that neural codes are complex
structures reflected in multiple aspects of EEG data such as amplitude,
phase, and frequency/scale (Panzeri et al., 2010; Waschke et al., 2021).

The advantage of theta over broad band in our data (see Supplementary
Figures 1 and 3) is consistent with previous monkey studies suggesting that
theta and gamma frequency bands played major roles in feedforward pro-
cessing of visual information in the brain (Bastos et al., 2015), which also
seemed dominant here (see Supplementary Figure 4). One potential reason
for the encoding of feedforward information in the theta band can be that
bottom-up sensory signals transfer information about ongoing experiences,
which might need to be stored in long-term memory for future use (Zheng
& Colgin, 2015). Long-term memories are suggested to be encoded by en-
hanced long-lasting synaptic connections. The optimal patterns of activity
that can cause such changes in synaptic weights were suggested to be suc-
cessive theta cycles that carry contents in fast gamma rhythms (∼100 Hz;
Larson, Wong, & Lynch, 1986). While direct correspondence between in-
vasive versus noninvasive neural data remains unclear (Ng, Logothetis, &
Kayser, 2013), this study provides additional evidence for the major role of
the theta frequency band in human visual perception (Wang et al., 2012; Qin et al., 2016; Jadidi et al., 2016; Taghizadeh-Sarabi et al., 2015; Torabi et al., 2017). It also suggests that the BCI community might benefit from concentrating on the specific frequency bands relevant to the cognitive or sensory processing ongoing in the brain, that is, investigating the theta band when stimulating the visual system.

One critical question for cognitive neuroscience has been whether (if at all) neuroimaging data can explain behavior (Williams et al., 2007; Ritchie et al., 2015; Woolgar et al., 2019; Karimi-Rouzbahani et al., 2019; Karimi-Rouzbahani, Ramezani, Woolgar, Rich, & Ghodrati, 2021). We extended this question by asking whether more optimal decoding of object category information can lead to better prediction of behavioral performance. We showed in data set 2 that this can be the case. Critically, we observed for the same data set that there seems to be a linear relationship between the obtainable decoding accuracy and the explanatory power of the features. This implies that in order to bring neuroimaging observations closer to behavior, we might need to work on how we can read out the neural codes more effectively.

It has been suggested that neural variability is modulated not only by sensory information (as focused on here) but also by other top-down cognitive processes such as attention, expectation, memory, and task demands (Waschke et al., 2021). For example, attention decreased low-frequency neural variabilities/power (2–10 Hz, referred to as "desynchronization") while increasing high-frequency neural variabilities/power (Wyart & Tallon-Baudry, 2009). Therefore, in the future, it will be interesting to know which features best detect the modulation of neural variability in other cognitive tasks. Moreover, it is interesting to know how (if at all) a combination of the features used in this study could provide any additional information about
je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Additional Information in Temporal Variability of Evoked Potentials

3063

object categories or behavior. In other words, although all of the individual features evaluated here covered some variance of object category information, to detect the neural information more effectively, it might be helpful to combine multiple features using supervised and unsupervised methods (Karimi Rouzbahani & Daliri, 2011; Qin et al., 2016).

The cross-data set, large-scale analysis methods implemented in this
study align with the growing trend toward meta-analysis in cognitive neu-
roscience. Recent studies have also adopted and compared several data sets
to facilitate forming more rigorous conclusions about how the brain per-
forms different cognitive processes such as sustained attention (Langner &
Eickhoff, 2013) or working memory (Adam, Vogel, & Awh, 2020). Our re-
sults provide evidence supporting the idea that neural variability seems to
be an additional channel for information encoding in EEG, which should
not be simply ignored.

Acknowledgments

H.K.-R. was funded by the Royal Society’s Newton International Fellow-
ship (SUAI/059/G101116) and MRC Cognition and Brain Sciences Unit.

References

Abootalebi, V., Moradi, M.. H., & Khalilzadeh, M.. UN. (2009). A new approach for
EEG feature extraction in P300-based lie detection. Computer Methods and Pro-
grams in Biomedicine, 94(1), 48–57. https://doi.org/10.1016/j.cmpb.2008.10.001,
PubMed: 19041154

Aboy, M., Hornero, R., Abásolo, D., & Álvarez, D. (2006). Interpretation of the
Lempel-Ziv complexity measure in the context of biomedical signal analysis.
IEEE Transactions on Biomedical Engineering, 53(11), 2282–2288. https://est ce que je.org/
10.1109/TBME.2006.883696, PubMed: 17073334

Adam, K. C., Vogel, E. K., & Awh, E. (2020). Multivariate analysis reveals a generaliz-
able human electrophysiological signature of working memory load. Psychophys-
iology, 57(12), e13691. https://doi.org/10.1111/psyp.13691

Ahmadi-Pajouh, M.. UN., Ala, T. S., Zamanian, F., Namazi, H., & Jafari, S.
(2018). Fractal-based classification of human brain response to living and
non-living visual stimuli. Fractals, 26(5), 1850069. https://doi.org/10.1142/
S0218348X1850069X

Alimardani, F., Cho, J.. H., Boostani, R., & Hwang, H. J.. (2018). Classification of bipo-
lar ?1217 disorder and schizophrenia using steady-state visual evoked potential
based features. IEEE Access, 6, 40379–40388.

Bastos, UN. M., & Schoffelen, J.. M.. (2016). A tutorial review of functional connectivity
analysis methods and their interpretational pitfalls. Frontiers in Systems Neuro-
science, 9, 175. https://doi.org/10.3389/fnsys.2015.00175, PubMed: 26778976
Bastos, UN. M., Vezoli, J., Bosman, C. UN., Schoffelen, J.. M., Oostenveld, R., Dowdall,
J.. R., . . . Fries, P.. (2015). Visual areas exert feedforward and feedback influences

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3064

H. Karimi-Rouzbahani et al.

through distinct frequency channels. Neuron, 85(2), 390–401. https://doi.org/10.1016/j.neuron.2014.12.018, PubMed: 25556836

Behroozi, M., Daliri, M.. R., & Shekarchi, B. (2016). EEG phase patterns reflect the
representation of semantic categories of objects. Medical and Biological Engineer-
ing and Computing, 54(1), 205–221. https://doi.org/10.1007/s11517-015-1391-7,
PubMed: 26400624

Bizas, E., Simos, P.. G., Stam, C. J., Arvanitis, S., Terzakis, D., & Micheloyannis, S.
(1999). EEG correlates of cerebral engagement in reading tasks. Brain Topography,
12(2), 99–105. https://doi.org/10.1023/a:1023410227707, PubMed: 10642009
Bruns, UN., Eckhorn, R., Jokeit, H., & Ebner, UN. (2000). Amplitude envelope correlation
detects coupling among incoherent brain signals. Neuroreport, 11(7), 1509–1514.
10841367

Carlson, T., Tovar, D. UN., Alink, UN., & Corps de guerre, N. (2013). Representational
dynamics of object vision: The first 1000 ms. Journal de vision, 13(10), 1–1.
https://doi.org/10.1167/13.10.1, PubMed: 23908380

Chan, UN. M., Halgren, E., Marinkovic, K., & Espèces, S. S. (2011). Decoding word
and category-specific spatiotemporal representations from MEG and EEG. Neu-
roImage, 54(4), 3028–3039. https://doi.org/10.1016/j.neuroimage.2010.10.073,
PubMed: 21040796

Calme, R.. M., Pantazis, D., & Oliva, UN. (2014). Resolving human object recognition in
space and time. Neurosciences naturelles, 17(3), 455. https://doi.org/10.1038/nn.3635,
PubMed: 24464044

Contini, E. W., Wardle, S. G., & Carlson, T. A. (2017). Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions. Neuropsychologia, 105, 165–176. https://doi.org/10.1016/j.neuropsychologia.2017.02.013, PubMed: 28215698
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5, 781. https://doi.org/10.3389/fpsyg.2014.00781, PubMed: 25120503

Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., & Reitboeck, H.
J.. (1988). Coherent oscillations: A mechanism of feature linking in the visual cor-
tex? Biological Cybernetics, 60(2), 121–130. https://doi.org/10.1007/BF00202899,
PubMed: 3228555

Ange, UN. K., Gerloff, C., Hilgetag, C. C., & Nolte, G. (2013). Intrinsic coupling
modes: Multiscale interactions in ongoing brain activity. Neurone, 80(4), 867–886.
https://doi.org/10.1016/j.neuron.2013.09.038, PubMed: 24267648

Fulcher, B. D., & Jones, N. S. (2017). HCTSA: A computational framework for au-
tomated time-series phenotyping using massive feature extraction. Cell Systems,
1(5), 527–531. https://doi.org/10.1016/j.cels.2017.10.001

Gabor, D. (1946). Theory of communication. Part 1: The analysis of information. Jour-
nal of the Institution of Electrical Engineers–Part III: Radio and Communication Engi-
neering, 93(26), 429–441. https://doi.org/10.1049/ji-3-2.1946.0074

Garrett, D. D., Epp, S. M., Kleemeyer, M., Lindenberger, U., & Polk, T. UN. (2020).
Higher performers upregulate brain signal variability in response to more

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Additional Information in Temporal Variability of Evoked Potentials

3065

feature-rich visual input. NeuroImage, 217, 116836. https://est ce que je.org/10.1016/j.
neuroimage.2020.116836

Gawne, T. J., Kjaer, T. W., & Richmond, B. J.. (1996). Latency: Another potential code
for feature binding in striate cortex. Journal de neurophysiologie, 76(2), 1356–1360.
https://doi.org/10.1152/jn.1996.76.2.1356, PubMed: 8871243

Gelman, UN., Hill, J., & Yajima, M.. (2012). Why we (usually) don’t have to worry about
multiple comparisons. Journal of Research on Educational Effectiveness, 5(2), 189–
211. https://doi.org/10.1080/19345747.2011.618213

Gelman, UN., & Tuerlinckx, F. (2000). Type S error rates for classical and Bayesian
single and multiple comparison procedures. Computational Statistics, 15(3), 373–
390. https://doi.org/10.1007/s001800000040

Grootswagers, T., Cichy, R. M., & Carlson, T. A. (2018). Finding decodable informa-
tion that can be read out in behaviour. NeuroImage, 179, 252–262. https://est ce que je.org/
10.1016/j.neuroimage.2018.06.022

Grootswagers, T., Robinson, UN. K., & Carlson, T. UN. (2019). The representational dy-
namics of visual objects in rapid serial visual processing streams. NeuroImage, 188,
668–679. https://doi.org/10.1016/j.neuroimage.2018.12.046, PubMed: 30593903
Grootswagers, T., Wardle, S. G., & Carlson, T. UN. (2017). Decoding dynamic brain pat-
terns from evoked responses: A tutorial on multivariate pattern analysis applied
to time series neuroimaging data. Journal of Cognitive Neuroscience, 29(4), 677–697.
https://doi.org/10.1162/jocn_a_01068, PubMed: 27779910

Guo, L., Rivero, D., Seoane, J.. UN., & Pazos, UN. (2009). Classification of EEG signals
using relative wavelet energy and artificial neural networks. In Proceedings of the
First ACM/SIGEVO Summit on Genetic and Evolutionary Computation (pp. 177–184).
New York: ACM. https://doi.org/10.1145/1543834.1543860

Hacker, C. D., Snyder, UN. Z., Pahwa, M., Corbetta, M., & Leuthardt, E. C.
(2017). Frequency-specific electrophysiologic correlates of resting state fMRI networks. NeuroImage, 149, 446–457. https://doi.org/10.1016/j.neuroimage.2017.01.
054, PubMed: 28159686

Hart, P.. E., Stork, D. G., & Duda, R.. Ô. (2000). Pattern classification. Hoboken: Wiley.
Hatamimajoumerd, E., & Talebpour, UN. (2019). A temporal neural trace of wavelet
coefficients in human object vision: An MEG study. Frontiers in Neural Circuits,
13, 20. https://doi.org/10.3389/fncir.2019.00020, PubMed: 31001091

Hatamimajoumerd, E., Talebpour, UN., & Mohsenzadeh, Oui. (2020). Enhancing mul-
tivariate pattern analysis for magnetoencephalography through relevant sensor
selection. International Journal of Imaging Systems and Technology, 30(2), 473–494.
https://doi.org/10.1002/ima.22398

Haxby, J.. V., Gobbini, M.. JE., Furey, M.. L., Ishai, UN., Schouten, J.. L., & Pietrini,
P.. (2001). Distributed and overlapping representations of faces and objects in
ventral temporal cortex. Science, 293(5539), 2425–2430. https://doi.org/10.1126/
science.1063736, PubMed: 11577229

Haynes, J.. D., & Rees, G. (2006). Decoding mental states from brain activity in
humans. Nature Reviews Neuroscience, 7(7), 523–534. https://doi.org/10.1038/
nrn1931, PubMed: 16791142

Hebart, M.. N., & Boulanger, C. je. (2018). Deconstructing multivariate decoding for
the study of brain function. NeuroImage, 180, 4–18. https://est ce que je.org/10.1016/j.
neuroimage.2017.08.005, PubMed: 28782682

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3066

H. Karimi-Rouzbahani et al.

Hermundstad, UN. M., Briguglio, J.. J., Conte, M.. M., Victor, J.. D., Balasubramanian,
V., & Tkaˇcik, G. (2014). Variance predicts salience in central sensory processing.
eLife, 3, e03722. https://doi.org/10.7554/eLife.03722, PubMed: 25396297

Higuchi, T. (1988). Approach to an irregular time series on the basis of the fractal
theory. Physica D: Nonlinear Phenomena, 31(2), 277–283. https://doi.org/10.1016/
0167-2789(88)90081-4

Hjorth, B. (1970). EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology, 29(3), 306–310. https://doi.org/10.1016/
0013-4694(70)90143-4, PubMed: 4195653

Hung, C. P., Kreiman, G., Poggio, T., & DiCarlo, J.. J.. (2005). Fast readout of ob-
ject identity from macaque inferior temporal cortex. Science, 310(5749), 863–866.
https://doi.org/10.1126/science.1117593, PubMed: 16272124

Intriligator, J., & Polich, J.. (1995). On the relationship between EEG and ERP vari-
ability. International Journal of Psychophysiology, 20(1), 59–74. https://est ce que je.org/10.
1016/0167-8760(95)00028-q, PubMed: 8543485

Iranmanesh, S., & Rodriguez-Villegas, E. (2017). An ultralow-power sleep spindle detection system on chip. IEEE Transactions on Biomedical Circuits and Systems, 11(4), 858–866. https://doi.org/10.1109/TBCAS.2017.2690908, PubMed: 28541914

Isik, L., Meyers, E. M., Leibo, J.. Z., & Poggio, T. (2014). The dynamics of invariant
object recognition in the human visual system. Journal of Neurophysiology, 111(1),
91–102. https://est ce que je.org/10.1152/jn.00394.2013

Jadidi, UN. F., Zargar, B. S., & Moradi, M.. H. (2016, Novembre). Categorizing visual
objets; Using ERP components. In Proceedings of the 2016 23rd Iranian Conference
on Biomedical Engineering and 2016 1st International Iranian Conference on Biomed-
ical Engineering (pp. 159–164). Piscataway, New Jersey: IEEE. https://doi.org/10.1109/
ICBME.2016.7890949

Jeffreys, H. (1998). The theory of probability. Oxford: Oxford University Press.
Joshi, D., Panigrahi, B. K., Anand, S., & Santhosh, J.. (2018). Classification of targets
and distractors present in visual hemifields using time-frequency domain EEG
features. Journal of Healthcare Engineering, 2018. https://doi.org/10.1155/2018/
9213707, PubMed: 29808111

Kaneshiro, B., Guimaraes, M.. P., Kim, H. S., Norcia, UN. M., & Suppes, P.. (2015). UN
representational similarity analysis of the dynamics of object processing using
single-trial EEG classification. PLOS One, 10(8), e0135697. https://est ce que je.org/10.
1371/journal.pone.0135697, PubMed: 26295970

Karakaş, S., Erzengin, Ö. U., & Başar, E. (2000). The genesis of human event-related
responses explained through the theory of oscillatory neural assemblies. Neu-
roscience Letters, 285(1), 45–48. https://doi.org/10.1016/s0304-3940(00)01022-3,
PubMed: 10788704

Karimi-Rouzbahani, H. (2018). Three-stage processing of category and varia-
tion information by entangled interactive mechanisms of peri-occipital and
peri-frontal cortices. Scientific Reports, 8(1), 1–22. https://doi.org/10.1038/
s41598-018-30601-8

Karimi-Rouzbahani, H., Bagheri, N., & Ebrahimpour, R.. (2017un). Average activity,
but not variability, is the dominant factor in the representation of object categories

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Additional Information in Temporal Variability of Evoked Potentials

3067

in the brain. Neuroscience, 346, 14–28. https://doi.org/10.1016/j.neuroscience.
2017.01.002, PubMed: 28088488

Karimi-Rouzbahani, H., Bagheri, N., & Ebrahimpour, R.. (2017b). Hard-wired feed-
forward visual mechanisms of the brain compensate for affine variations in object
recognition. Neuroscience, 349, 48–63. https://doi.org/10.1016/j.neuroscience.
2017.02.050, PubMed: 28245990

Karimi-Rouzbahani, H., Bagheri, N., & Ebrahimpour, R.. (2017c). Invariant object
recognition is a personalized selection of invariant features in humans, not sim-
ply explained by hierarchical feed-forward vision models. Scientific Reports, 7(1),
1–24.

Karimi Rouzbahani, H., & Daliri, M.. R.. (2011). Diagnosis of Parkinson’s disease in
human using voice signals. Basic and Clinical Neuroscience, 2(3), 12–20. http://bcn.
iums.ac.ir/article-1-96-en.html

Karimi-Rouzbahani, H., Ramezani, F., Woolgar, UN., Rich, UN., & Ghodrati, M..
(2021). Perceptual difficulty modulates the direction of information flow in
familiar face recognition. NeuroImage, 233, 117896. https://doi.org/10.1016/j.neuroimage.2021.117896

Karimi-Rouzbahani, H., Vahab, E., Ebrahimpour, R., & Menhaj, M.. B. (2019). Spa-
tiotemporal analysis of category and target-related information processing in
the brain during object detection. Behavioural Brain Research, 362, 224–239.
https://doi.org/10.1016/j.bbr.2019.01.025

Karimi-Rouzbahani, H., Woolgar, UN., & Rich, UN. N. (2021). Neural signatures of vig-
ilance decrements predict behavioural errors before they occur. eLife, 10, e60563.
https://doi.org/10.7554/eLife.60563, PubMed: 33830017

Katz, M.. J.. (1988). Fractals and the analysis of waveforms. Computers in Biol-
ogy and Medicine, 18(3), 145–156. https://doi.org/10.1016/0010-4825(88)90041-8,
PubMed: 3396335

Kayser, C., Montemurro, M.. UN., Logothetis, N. K., & Panzeri, S. (2009). Spike-phase
coding boosts and stabilizes information carried by spatial and temporal spike
patterns. Neuron, 61(4), 597–608. https://doi.org/10.1016/j.neuron.2009.01.008,
PubMed: 19249279

Kiani, R., Esteky, H., Mirpour, K., & Tanaka, K. (2007). Object category structure in re-
sponse patterns of neuronal population in monkey inferior temporal cortex. Jour-
nal of Neurophysiology, 97(6), 4296–4309. https://doi.org/10.1152/jn.00024.2007, PubMed: 17428910

Kosciessa, J.. Q., Lindenberger, U., & Garrett, D. D. (2021). Thalamocortical excitabil-
ity modulation guides human perception under uncertainty. Nature Communications, 12(1), 1–15. https://doi.org/10.1038/s41467-021-22511-7

Kriegeskorte, N., Mur, M., Ruff, D. A., Kiani, R., Bodurka, J., Esteky, H., . . . Bandettini, P. A. (2008). Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6), 1126–1141. https://doi.org/10.1016/j.neuron.2008.10.043, PubMed: 19109916

Langner, R., & Eickhoff, S. B. (2013). Sustaining attention to simple tasks: A meta-
analytic review of the neural mechanisms of vigilant attention. Psychological Bul-
letin, 139(4), 870. https://doi.org/10.1037/a0030694, PubMed: 23163491

Larson, J., Wong, D., & Lynch, G. (1986). Patterned stimulation at the theta fre-
quency is optimal for the induction of hippocampal long-term potentiation.
Brain Research, 368(2), 347–350. https://doi.org/10.1016/0006-8993(86)90579-2,
PubMed: 3697730

Le Van Quyen, M., Foucher, J., Lachaux, J.. P., Rodriguez, E., Lutz, UN., Martinerie, J.,
& Varela, F. J.. (2001). Comparison of Hilbert transform and wavelet methods for
the analysis of neuronal synchrony. Journal of Neuroscience Methods, 111(2), 83–98.
https://doi.org/10.1016/S0165-0270(01)00372-7

Lee, M.. D., & Wagenmakers, E. J.. (2005). Bayesian statistical inference in psychology:
Comment on Trafimow (2003). Psychological Review, 112(3), 662–668. https://doi.org/10.1037/0033-295X.112.3.662, PubMed: 16060758

Lempel, UN., & Ziv, J.. (1976). On the complexity of finite sequences. IEEE Transactions
on Information Theory, 22(1), 75–81. https://doi.org/10.1109/TIT.1976.1055501
Liu, H., Agam, Y., Madsen, J.. R., & Kreiman, G. (2009). Timing, timing, timing:
Fast decoding of object information from intracranial field potentials in human
visual cortex. Neuron, 62(2), 281–290. https://doi.org/10.1016/j.neuron.2009.02.025, PubMed: 19409272

Majima, K., Matsuo, T., Kawasaki, K., Kawai, K., Saito, N., Hasegawa, JE., & Kamitani,
Oui. (2014). Decoding visual object categories from temporal correlations of ECoG
signals. NeuroImage, 90, 74–83. https://doi.org/10.1016/j.neuroimage.2013.12.
020

Mazaheri, UN., & Jensen, Ô. (2008). Asymmetric amplitude modulations of brain oscil-
lations generate slow evoked responses. Journal of Neuroscience, 28(31), 7781–7787. https://doi.org/10.1523/JNEUROSCI.1631-08.2008, PubMed: 18667610

Miyakawa, N., Majima, K., Sawahata, H., Kawasaki, K., Matsuo, T., Kotake, N., . . .
Hasegawa, je. (2018). Heterogeneous redistribution of facial subcategory infor-
mation within and outside the face-selective domain in primate inferior tem-
poral cortex. Cerebral Cortex, 28(4), 1416–1431. https://doi.org/10.1093/cercor/bhx342, PubMed: 29329375

Montemurro, M.. UN., Rasch, M.. J., Murayama, Y., Logothetis, N. K., & Panzeri, S.
(2008). Phase-of-firing coding of natural visual stimuli in primary visual cor-
tex. Current Biology, 18(5), 375–380. https://doi.org/10.1016/j.cub.2008.02.023, PubMed: 18328702

Mostame, P., & Sadaghiani, S. (2020). Phase- and amplitude-coupling are tied
by an intrinsic spatial organization but show divergent stimulus-related
changes. NeuroImage, 219, 117051. https://doi.org/10.1016/j.neuroimage.2020.117051

Namazi, H., Ala, T. S., & Bakardjian, H. (2018). Decoding of steady-state visual
evoked potentials by fractal analysis of the electroencephalographic (EEG) signal. Fractals, 26(6), 1850092. https://doi.org/10.1142/S0218348X18500925

Ng, B. S. W., Logothetis, N. K., & Kayser, C. (2013). EEG phase patterns reflect the se-
lectivity of neural firing. Cerebral Cortex, 23(2), 389–398. https://doi.org/10.1093/cercor/bhs031

Orbán, G., Berkes, P., Fiser, J., & Lengyel, M.. (2016). Neural variability and sampling-
based probabilistic representations in the visual cortex. Neuron, 92(2), 530–543. https://doi.org/10.1016/j.neuron.2016.09.038, PubMed: 27764674

Panzeri, S., Brunel, N., Logothetis, N. K., & Kayser, C. (2010). Sensory neural
codes using multiplexed temporal scales. Trends in Neurosciences, 33(3), 111–120.
https://doi.org/10.1016/j.tins.2009.12.001, PubMed: 20045201

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Additional Information in Temporal Variability of Evoked Potentials

3069

Pincus, S. M., & Huang, W. M.. (1992). Approximate entropy: Statistical properties
and applications. Communications in Statistics–Theory and Methods, 21(11), 3061–
3077. https://doi.org/10.1080/03610929208830963

Pouryazdian, S., & Erfanian, UN. (2009). Detection of steady-state visual evoked po-
tentials for brain-computer interfaces using PCA and high-order statistics. In World Congress on Medical Physics and Biomedical Engineering (pp. 480–483). Berlin:
Springer. https://doi.org/10.1007/978-3-642-03889-1_128

Preissl, H., Lutzenberger, W., Pulvermüller, F., & Birbaumer, N. (1997). Fractal dimensions of short EEG time series in humans. Neuroscience Letters, 225(2), 77–80.
Procaccia, je. (1988). Universal properties of dynamically complex systems: The orga-
nization of chaos. Nature, 333(6174), 618–623. https://doi.org/10.1038/333618a0
Pulini, UN. UN., Kerr, W. T., Loo, S. K., & Lenartowicz, UN. (2019). Classification accu-
racy of neuroimaging biomarkers in attention-deficit/hyperactivity disorder: Ef-
fects of sample size and circular analysis. Biological Psychiatry: Cognitive Neuro-
science and Neuroimaging, 4(2), 108–120. https://doi.org/10.1016/j.bpsc.2018.06.
003, PubMed: 30064848

Qin, Y., Zhan, Y., Wang, C., Zhang, J., Yao, L., Guo, X., . . . Hu, B. (2016). Clas-
sifying four-category visual objects using multiple ERP components in single-
trial ERP. Cognitive Neurodynamics, 10(4), 275–285. https://doi.org/10.1007/
s11571-016-9378-0, PubMed: 27468316

Racine, R.. (2011). Estimating the Hurst exponent. Zurich: Mosaic Group.
Rasoulzadeh, V., Erkus, E. C., Yogurt, T. A., Ulusoy, I., & Zergeroğlu, S. (2017). A comparative stationarity analysis of EEG signals. Annals of Operations Research, 258(1), 133–157. https://doi.org/10.1007/s10479-016-2187-3

Richman, J.. S., & Moorman, J.. R.. (2000). Physiological time-series analysis us-
ing approximate entropy and sample entropy. American Journal of Physiology–
Heart and Circulatory Physiology, 278(6), H2039–2049. https://doi.org/10.1152/
ajpheart.2000.278.6.H2039, PubMed: 10843903

Ritchie, J.. B., Tovar, D. UN., & Carlson, T. UN. (2015). Emerging object representations in
the visual system predict reaction times for categorization. PLOS Computational
Biology, 11(6), e1004316. https://doi.org/10.1371/journal.pcbi.1004316

Robinson, UN. K., Grootswagers, T., & Carlson, T. UN. (2019). The influence of image
masking on object representations during rapid serial visual presentation. Neu-
roImage, 197, 224–231. https://doi.org/10.1016/j.neuroimage.2019.04.050

Rossion, B., Gauthier, JE., Tarr, M.. J., Despland, P., Bruyer, R., Linotte, S., & Crom-
melinck, M.. (2000). The N170 occipito-temporal component is delayed and
enhanced to inverted faces but not to inverted objects: An electrophysiological
account of face-specific processes in the human brain. Neuroreport, 11(1), 69–72.
https://doi.org/10.1097/00001756-200001170-00014, PubMed: 10683832

Rouder, J.. N., Morey, R.. D., Speckman, P.. L., & Province, J.. M.. (2012). Default Bayes
factors for ANOVA designs. Journal of Mathematical Psychology, 56(5), 356–374.
https://doi.org/10.1016/j.jmp.2012.08.001

Rousselet, G. UN., Husk, J.. S., Bennett, P.. J., & Sekuler, UN. B. (2007). Single-trial
EEG dynamics of object and face visual processing. NeuroImage, 36(3), 843–862.
https://doi.org/10.1016/j.neuroimage.2007.02.052, PubMed: 17475510

Rupp, K., Roos, M., Milsap, G., Caceres, C., Ratto, C., Chevillet, M., . . . Wolmetz, M..
(2017). Semantic attributes are encoded in human electrocorticographic signals
during visual object recognition. NeuroImage, 148, 318–329. https://doi.org/10.1016/j.neuroimage.2016.12.074, PubMed: 28088485

Sammer, G. (1999). Working memory load and EEG-dynamics as revealed by point
correlation dimension analysis. International Journal of Psychophysiology, 34(1), 89–
102. https://doi.org/10.1016/s0167-8760(99)00039-2, PubMed: 10555877

Shourie, N., Firoozabadi, M., & Badie, K. (2014). Analysis of EEG signals related
to artists and nonartists during visual perception, mental imagery, and rest us-
ing approximate entropy. BioMed Research International, 2014, 764382. https://doi.org/10.1155/2014/764382

Siegel, M., Donner, T. H., & Engel, A. K. (2012). Spectral fingerprints of large-scale neuronal interactions. Nature Reviews Neuroscience, 13(2), 121–134. https://doi.org/10.1038/nrn3137

Siems, M., & Siegel, M.. (2020). Dissociated neuronal phase- and amplitude-coupling
patterns in the human brain. NeuroImage, 209, 116538. https://doi.org/10.1016/j.neuroimage.2020.116538, PubMed: 31935522

Simanova, JE., Van Gerven, M., Oostenveld, R., & Hagoort, P.. (2010). Identifying ob-
ject categories from event-related EEG: toward decoding of conceptual represen-
tations. PLOS One, 5(12), e14465. https://doi.org/10.1371/journal.pone.0014465
Stam, C. J.. (2000). Brain dynamics in theta and alpha frequency bands and working
memory performance in humans. Neuroscience Letters, 286(2), 115–118. https://
doi.org/10.1016/s0304-3940(00)01109-5, PubMed: 10825650

Stam, C. J.. (2005). Nonlinear dynamical analysis of EEG and MEG: Review of an
emerging field. Clinical Neurophysiology, 116(10), 2266–2301. https://doi.org/10.1016/j.clinph.2005.06.011, PubMed: 16115797

Stępień, R. A. (2002). Testing for non-linearity in EEG signal of healthy subjects. Acta Neurobiologiae Experimentalis, 62(4), 277–282. PubMed: 12659294

Stewart, UN. X., Nuthmann, UN., & Sanguinetti, G. (2014). Single-trial classification
of EEG in a visual object task using ICA and machine learning. Journal de
Neuroscience Methods, 228, 1–14. https://doi.org/10.1016/j.jneumeth.2014.02.014,
PubMed: 24613798

Storey, J.. D. (2002). A direct approach to false discovery rates. Journal of the Royal Sta-
tistical Society: Series B (Statistical Methodology), 64(3), 479–498. https://doi.org/10.1111/1467-9868.00346

Subha, D. P., Joseph, P.. K., Acharya, R., & Lim, C. M.. (2010). EEG signal analy-
sis: A survey. Journal of Medical Systems, 34(2), 195–212. https://doi.org/10.1007/
s10916-008-9231-z, PubMed: 20433058

Szczepański, J., Amigó, J. M., Wajnryb, E., & Sanchez-Vives, M. V. (2003). Application of Lempel–Ziv complexity to the analysis of neural discharges. Network: Computation in Neural Systems, 14(2), 335–350. PubMed: 12790188

Tafreshi, T. F., Daliri, M.. R., & Ghodousi, M.. (2019). Functional and effec-
tive connectivity based features of EEG signals for object recognition. Cogni-
tive Neurodynamics, 13(6), 555–566. https://doi.org/10.1007/s11571-019-09556-7,
PubMed: 31741692

Taghizadeh-Sarabi, M., Daliri, M.. R., & Niksirat, K. S. (2015). Decoding objects of
basic categories from electroencephalographic signals using wavelet transform
and support vector machines. Brain Topography, 28(1), 33–46. https://doi.org/10.1007/s10548-014-0371-9

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Additional Information in Temporal Variability of Evoked Potentials

3071

Tononi, G., & Edelman, G. M. (1998). Consciousness and complexity. Science, 282(5395), 1846–1851. https://doi.org/10.1126/science.282.5395.1846, PubMed: 9836628

Torabi, UN., Jahromy, F. Z. and Daliri, M.. R., 2017. Semantic category-based classifica-
tion using nonlinear features and wavelet coefficients of brain signals. Cognitive
Computation, 9(5), 702–711. https://doi.org/10.1007/s12559-017-9487-z

Victor, J.. D. (2000). How the brain uses time to represent and process visual infor-
mation. Brain Research, 886(1–2), 33–46. https://doi.org/10.1016/s0006-8993(00)
02751-7

Vidal, J.. R., Ossandón, T., Jerbi, K., Dalal, S. S., Minotti, L., Ryvlin, P., . . . Lachaux,
J.. P.. (2010). Category-specific visual responses: An intracranial study compar-
ing gamma, beta, alpha, and ERP response selectivity. Frontiers in Human Neu-
roscience, 4, 195. https://doi.org/10.3389/fnhum.2010.00195, PubMed: 21267419
Vidaurre, D., Myers, N. E., Stokes, M., Nobre, UN. C., & Woolrich, M.. W. (2019). Tempo-
rally unconstrained decoding reveals consistent but time-varying stages of stim-
ulus processing. Cerebral Cortex, 29(2), 863–874. https://doi.org/10.1093/cercor/bhy290, PubMed: 30535141

Voloh, B., Oemisch, M., & Womelsdorf, T. (2020). Phase of firing coding of learning
variables across the fronto-striatal network during feature-based learning. Na-
ture Communications, 11(1), 1–16. https://doi.org/10.1038/s41467-020-18435-3,
PubMed: 32938940

Wairagkar, M., Zoulias, I., Oguntosin, V., Hayashi, Y., & Nasuto, S. (2016, June). Movement intention based brain computer interface for virtual reality and soft robotics rehabilitation using novel autocorrelation analysis of EEG. In Proceedings of the 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (pp. 685–685). Piscataway, NJ: IEEE. https://doi.org/10.1109/BIOROB.2016.7523705

Wang, C., Xiong, S., Hu, X., Yao, L., & Zhang, J.. (2012). Combining features from ERP
components in single-trial EEG for discriminating four-category visual objects.
Journal of Neural Engineering, 9(5), 056013. https://doi.org/10.1088/1741-2560/
9/5/056013, PubMed: 22983495

Wang, Y., Wang, P., & Yu, Y. (2018). Decoding English alphabet letters using EEG phase information. Frontiers in Neuroscience, 12, 62. https://doi.org/10.3389/fnins.2018.00062, PubMed: 29467615

Wark, B., Fairhall, UN., & Rieke, F. (2009). Timescales of inference in visual adap-
tation. Neuron, 61(5), 750–761. https://doi.org/10.1016/j.neuron.2009.01.019, PubMed: 19285471

Waschke, L., Kloosterman, N. UN., Obleser, J., & Garrett, D. D. (2021). Behavior needs
neural variability. Neuron, 109, 751–766. https://doi.org/10.1016/j.neuron.2021.01.023, PubMed: 33596406

Waschke, L., Tune, S., & Obleser, J.. (2019). Local cortical desynchronization and
pupil-linked arousal differentially shape brain states for optimal sensory perfor-
mance. eLife, 8, e51501. https://doi.org/10.7554/eLife.51501, PubMed: 31820732
Watrous, UN. J., Tremper, L., Fell, J., & Axmacher, N. (2015). Phase-amplitude coupling

supports phase coding in human ECoG. eLife, 4, e07886.

Williams, M.. UN., Dang, S., & Kanwisher, N. G. (2007). Only some spatial patterns of
fMRI response are read out in task performance. Neurosciences naturelles, 10(6), 685–
686. https://doi.org/10.7554/eLife.12810, PubMed: 26544678

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

3072

H. Karimi-Rouzbahani et al.

Wong, K. F. K., Galka, UN., Yamashita, O., & Ozaki, T. (2006). Modelling non-stationary
variance in EEG time series by state space GARCH model. Computers in Biology
and Medicine, 36(12), 1327–1335. https://doi.org/10.1016/j.compbiomed.2005.10.
001, PubMed: 16293239

Woolgar, UN., Dermody, N., Afshar, S., Williams, M.. UN., & Rich, UN. N. (2019).
Meaningful patterns of information in the brain revealed through analysis of errors.
bioRxiv:673681. https://doi.org/10.1101/673681

Wyart, V., & Tallon-Baudry, C. (2009). How ongoing fluctuations in human visual
cortex predict perceptual awareness: Baseline shift versus decision bias. Journal de
Neurosciences, 29(27), 8715–8725. https://doi.org/10.1523/JNEUROSCI.0962-09.
2009, PubMed: 19587278

Zellner, UN., & Siow, UN. (1980). Posterior odds ratios for selected regression
hypotheses. Trabajos de estadística y de investigación operativa, 31(1), 585–603.
https://doi.org/10.1007/BF02888369

Zheng, C., & Colgin, L. L. (2015). Beta and gamma rhythms go with the
flow. Neuron, 85(2), 236–237. https://doi.org/10.1016/j.neuron.2014.12.067, PubMed: 25611505

Received October 27, 2020; accepted June 2, 2021.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

e
d
toi
n
e
c
o
un
r
t
je
c
e

p
d

/

je

F
/

/

/

/

3
3
1
1
3
0
2
7
1
9
6
6
5
9
2
n
e
c
o
_
un
_
0
1
4
3
6
p
d

.

/

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3LETTER image
LETTER image
LETTER image
LETTER image

Télécharger le PDF