Neural Correlates of Subsequent Memory-Related
Gaze Reinstatement

Jordana S. Wynn1*, Zhong-Xu Liu2*, and Jennifer D. Ryan3,4

1Harvard University, 2University of Michigan-Dearborn, 3Rotman Research Institute at Baycrest Health Sciences, 4University of Toronto
*Equal contribution.

© 2021 Massachusetts Institute of Technology

Abstract

Mounting evidence linking gaze reinstatement—the recapitulation of encoding-related gaze patterns during retrieval—to behavioral measures of memory suggests that eye movements play an important role in mnemonic processing. Yet, the nature of the gaze scanpath, including its informational content and neural correlates, has remained in question. In this study, we examined eye movement and neural data from a recognition memory task to further elucidate the behavioral and neural bases of functional gaze reinstatement. Consistent with previous work, gaze reinstatement during retrieval of freely viewed scene images was greater than chance and predictive of recognition memory performance. Gaze reinstatement was also associated with viewing of informationally salient image regions at encoding, suggesting that scanpaths may encode and contain high-level scene content. At the brain level, gaze reinstatement was predicted by encoding-related activity in the occipital pole and BG, neural regions associated with visual processing and oculomotor control. Finally, cross-voxel brain pattern similarity analysis revealed overlapping subsequent memory and subsequent gaze reinstatement modulation effects in the parahippocampal place area and hippocampus, in addition to the occipital pole and BG. Together, these findings suggest that encoding-related activity in brain regions associated with scene processing, oculomotor control, and memory supports the formation, and subsequent recapitulation, of functional scanpaths. More broadly, these findings lend support to Scanpath Theory’s assertion that eye movements both encode, and are themselves embedded in, mnemonic representations.

INTRODUCTION

The human visual field is limited, requiring us to move our
eyes several times a second to explore the world around
us. This necessarily sequential process of selecting visual
features for fixation and further processing has important
implications for memory. Research using eye movement
monitoring indicates that, during visual exploration, fixa-
tions and saccades support the binding of salient visual
features and the relations among them into coherent
and lasting memory traces (e.g., Liu, Rosenbaum, & Ryan, 2020; Liu, Shen, Olsen, & Ryan, 2017; for a review, see Wynn, Shen, & Ryan, 2019). Moreover, such memory traces may be stored and subsequently recapitulated as patterns of eye movements or “scanpaths” at retrieval (Noton & Stark, 1971a, 1971b; for a review, see Wynn et al., 2019). Specifically, when presented with a previously encoded stimulus or a cue to retrieve a previously encoded stimulus from memory, humans (and nonhuman primates; see Sakon & Suzuki, 2019) spontaneously reproduce the scanpath enacted during encoding (i.e., gaze reinstatement), and this reinstatement is predictive of mnemonic performance across a variety of tasks (e.g.,
Wynn, Ryan, & Buchsbaum, 2020; Damiano & Walther,
2019; Wynn, Olsen, Binns, Buchsbaum, & Ryan, 2018;
Scholz, Mehlhorn, & Krems, 2016; Laeng, Bloem,
D’Ascenzo, & Tommasi, 2014; Olsen, Chiew, Buchsbaum,
& Ryan, 2014; Johansson & Johansson, 2013; Foulsham
et al., 2012; Laeng & Teodorescu, 2002; for a review, see Wynn et al., 2019). Although there is now considerable evidence supporting a link between gaze reinstatement (i.e., reinstatement of encoding gaze patterns during retrieval) and memory retrieval, investigations regarding the neural correlates of this effect are recent and few (see Bone et al., 2019; Ryals, Wang, Polnaszek, & Voss, 2015), and no study to date has investigated the patterns of neural activity at encoding that predict subsequent gaze reinstatement. Thus, to further elucidate the link between eye movements and memory at the neural level, this study used concurrent eye movement monitoring and fMRI to investigate the neural mechanisms at encoding that predict functional gaze reinstatement (i.e., gaze reinstatement that supports mnemonic performance) at retrieval, in the vein of subsequent memory studies (e.g., Brewer, Zhao, Desmond, Glover, & Gabrieli, 1998; Wagner et al., 1998; for a review, see Hannula & Duff, 2017).

Scanpaths have been proposed to at once contain, and support the retrieval of, spatiotemporal contextual information (Noton & Stark, 1971a, 1971b). According to Noton and Stark’s (1971a, 1971b) seminal Scanpath Theory, on which much of the current gaze reinstatement literature is based (see Wynn et al., 2019),
scanpaths consist of both image features and the fixations
made to them as “an alternating sequence of sensory and
motor memory traces.” Consistent with this proposal, re-
search using eye movement monitoring and neuroimag-
ing techniques has established an important role for eye
movements in visual memory encoding (for a review, Vedere
Ryan, Shen, & Liu, 2020; Meister & Buffalo, 2016). For ex-
ample, at the behavioral level, recognition memory accu-
racy is significantly attenuated when eye movements
during encoding are restricted (e.g., to a central fixation cross) as opposed to free (e.g., Liu et al., 2020; Damiano & Walther, 2019; Henderson, Williams, & Falk, 2005). At
the neural level, restricting viewing to a fixed location
during encoding results in attenuated activity in brain
regions associated with memory and scene processing
including the hippocampus (HPC) and parahippocampal
place area (PPA), as well as reduced functional connectiv-
ity between these regions and other cortical regions (Liu
et al., 2020). When participants are free to explore, how-
ever, the number of fixations executed is positively pre-
dictive of subsequent memory performance (e.g.,
Fehlmann et al., 2020; Liu et al., 2017; Olsen et al., 2016;
Loftus, 1972) and of activity in the HPC (Liu et al., 2017,
2020; see also Olsen et al., 2016) and medial temporal lobe
(Fehlmann et al., 2020), suggesting that eye movements are
critically involved in the accumulation and encoding of
visual feature information into lasting memory traces.
That the relationships between gaze fixations and recognition memory performance (e.g., Wynn, Buchsbaum, & Ryan, 2021; see also Chan, Chan, Lee, & Hsiao, 2018) and between gaze fixations and HPC activity (Liu, Shen, Olsen, & Ryan, 2018) are reduced with age, despite an increase in the number of fixations (e.g., Firestone, Turk-Browne, & Ryan, 2007; Heisz & Ryan, 2011), further suggests that these effects extend beyond the effects of mere attention or interest.

Recent work suggests that eye movements not only play
an important role in memory encoding but also actively
support memory retrieval. Consistent with the Scanpath
Theory, several studies have provided evidence that gaze
patterns elicited during stimulus encoding are recapitu-
lated during subsequent retrieval and are predictive of
mnemonic performance (e.g., Wynn et al., 2018, 2020;
Damiano & Walther, 2019; Scholz et al., 2016; Laeng
et al., 2014; Olsen et al., 2014; Johansson & Johansson,
2013; Foulsham et al., 2012; Laeng & Teodorescu, 2002;
for a review, see Wynn et al., 2019). In addition to
advancing a functional role for eye movements in mem-
ory retrieval, this literature has raised intriguing questions
regarding the nature of the scanpath and its role in mem-
ory. For example, how are scanpaths created, and what
information do they contain? To answer these questions,
it is necessary not only to relate eye movement and be-
havioral patterns, as prior research has done, but also,
and perhaps more critically, to relate eye movement
and neural patterns. Yet, only two studies, to our knowl-
edge, have directly investigated the neural correlates of

gaze reinstatement, with both focusing on retrieval-related
activity patterns. In the first of these studies, Ryals et al.
(2015) demonstrated that trial-level variability in gaze sim-
ilarity (between previously viewed scenes and novel scenes
with similar feature configurations) was associated with
activity in the right HPC. Extending this work, Bone et al.
(2019) observed that gaze reinstatement (i.e., similarity be-
tween participant- and image-specific gaze patterns during
encoding and subsequent visualization) was positively cor-
related with whole-brain neural reinstatement (i.e., similar-
ity between image-specific patterns of brain activity evoked
during encoding and subsequent visualization) during a
visual imagery task. Considered together, these two studies
provide evidence that functional gaze reinstatement is re-
lated to neural activity patterns typically associated with
memory retrieval, suggesting a common mechanism.

Although there is now some evidence that mnemonic
retrieval processes support gaze reinstatement at the
neural level, the relationship between gaze reinstatement
and encoding-related neural activity has yet to be investi-
gated. Accordingly, this study used the data from Liu et al.
(2020) to elucidate the encoding mechanisms that
support the formation and subsequent recapitulation of
functional scanpaths. Participants encoded intact and
scrambled scenes under free or fixed (restricted) viewing
conditions (in the scanner) and subsequently completed a
recognition memory task with old (i.e., encoded) and new (i.e., lure) images (outside the scanner). Previous analysis
of this data revealed that, when compared to free viewing,
restricting eye movements reduced activity in the HPC,
connectivity between the HPC and other visual and
memory regions, E, ultimately, subsequent memory
performance (Liu et al., 2020). These findings critically
suggest that eye movements and memory encoding are
linked at both the behavioral and neural levels. Here, we
extend this work further by investigating the extent to
which the patterns of eye movements, or scanpaths, that
are created at encoding are reinstated at retrieval to
support memory performance and also by investigating
the neural activity at encoding that predicts the subse-
quent reinstatement of scanpaths at retrieval.

To this end, we first computed the spatial similarity
between encoding and retrieval scanpaths (containing
information about fixation location and duration) E
used this measure to predict recognition memory accu-
racy. On the basis of prior evidence of functional gaze
reinstatement, we predicted that gaze reinstatement
would be both greater than chance and positively corre-
lated with recognition of old images. To further interro-
gate the nature of information represented in the
scanpath, we additionally correlated gaze reinstatement
with measures of visual (i.e., stimulus-driven; bottom–up) and informational (i.e., participant-driven; bottom–up and top–down) saliency. Given that prior work has revealed a significant role for top–down features (e.g., meaning, Henderson & Hayes, 2018; scene content, O’Connell &
Walther, 2015) in guiding eye movements, above and
beyond bottom–up image features (e.g., luminance, contrast; Itti & Koch, 2000), we hypothesized that gaze reinstatement would be related particularly to the viewing of informationally salient image regions. Finally, to uncover the neural correlates of functional gaze reinstatement, we analyzed neural activity patterns at encoding, both across the whole brain and in memory-related ROIs (i.e., HPC, PPA; see Liu et al., 2020), to identify brain regions that (1) predicted subsequent gaze reinstatement at retrieval and (2) showed overlapping subsequent gaze reinstatement and subsequent memory effects. Given that previous work has linked gaze scanpaths, as a critical component of mnemonic representations, to successful encoding and retrieval, we hypothesized that functional gaze reinstatement would be supported by encoding-related neural activity in brain regions associated with visual processing (i.e., ventral visual stream regions) and memory (i.e., medial temporal lobe regions). By linking the neural correlates and behavioral outcomes of gaze reinstatement, this study provides novel evidence in support of Noton and Stark’s assertion that scanpaths both serve to encode and are themselves encoded into memory, allowing them to facilitate retrieval via recapitulation and reactivation of informationally salient image features.

METHODS

Participants
Participants were 36 young adults (22 women) aged 18–
35 years (M = 23.58 years, SD = 4.17) with normal or

corrected-to-normal vision and no history of neurological
or psychiatric disorders. All participants were recruited
from the University of Toronto and surrounding
Toronto area community and were given monetary com-
pensation for their participation in the study. All partici-
pants provided written informed consent in accordance with the Research Ethics Board at the Rotman Research Institute at Baycrest Health Sciences.

Stimuli

Stimuli consisted of 864, 500 × 500-pixel, colored images,
made up of 24 images of each of 36 semantic scene categories (e.g., living room, arena, warehouse), varying along
the feature dimensions of size and clutter (six levels per
dimension = 36 unique feature level combinations, bal-
anced across conditions).1 Within each scene category,
eight images were assigned to the free-viewing encoding
condition and eight images were assigned to the fixed-
viewing encoding condition; images were randomly as-
signed to eight fMRI encoding runs (36 images per run
per viewing condition). The remaining eight images in
each scene category were used as novel lures at retrieval.
One hundred forty-four scene images from encoding (72
images per viewing condition from two randomly selected
encoding runs) E 72 scene images from retrieval (two
per scene category) were scrambled using six levels of tile
size (see Figure 1). Thus, in total, 432 intact scene images and 144 scrambled color-tile images were viewed at encoding, balanced across free- and fixed-viewing conditions, and 648 intact scene images (432 old and 216 novel

Figura 1. (UN) Visualization of the experimental procedure for the in-scan encoding task. Before each trial, a green or red fixation cross was presented
on the screen indicating whether participants would be required to freely view (green) or maintain fixation (red) during presentation of the
upcoming image. Note that although fixations are presented centrally here, during the experiment, they were presented in a random location within
a 100-pixel radius around the center of the screen. Participants completed six runs of scenes and two runs of scrambled color-tile images, consisting
Di 72 images each. (B) Visualization of the gaze reinstatement analysis, with one example each of a high similarity score and a low similarity score.
Reinstatement (cioè., similarity) scores reflect the spatial overlap between patterns of fixations (defined by location and duration) corresponding to
the same image viewed by the same participant during encoding and retrieval, controlling for image-invariant (idiosyncratic) viewing biases (per esempio.,
center bias).


lure) and 216 scrambled color-tile images (144 old and 72 novel lure) were viewed at retrieval. All images were balanced for low-level image properties (e.g., luminance, contrast)2 and counterbalanced across participants (for assignment to experimental/stimulus conditions).

Procedure

In-scan Scene Encoding Task

Participants completed eight encoding runs in the scan-
ner, six containing scene images and two containing
scrambled images (run order was randomized within par-
ticipants3; see Figure 1A). Within each run, participants
viewed 72 images, half of which were studied under
free-viewing instructions and half of which were studied
under fixed-viewing instructions. Before the start of each
trial, participants were presented with a fixation cross for
1.72–4.16 sec (exponential distribution, M = 2.63 sec)
presented in a random location within a 100-pixel
(1.59° visual angle) radius around the center of the
screen. The color of the cross indicated the viewing in-
structions for the following image, with free viewing indi-
cated by a green cross and fixed viewing indicated by a
red cross. After presentation of the fixation cross, a scene
or scrambled image appeared for 4 sec, during which
time participants were instructed to encode as much in-
formation as possible. If the image was preceded by a red
cross, participants were to maintain fixation on the loca-
tion of the cross for the duration of image presentation.
The length of each run was 500 sec, with 10 and 12.4 sec added to the beginning and end of the run, respectively.

Postscan Scene Recognition Task

After the encoding task, participants were given a 60-min
break before completing the retrieval task in a separate
testing room. For the retrieval task, participants viewed
all 576 images (432 scene images and 144 scrambled
color-tile images) from the encoding task along with 288
novel lure images (216 scene images and 72 scrambled
color-tile images), divided evenly into six blocks. Before
the start of each trial, participants were presented with a
fixation cross for 1.5 sec presented in a random location
within a 100-pixel radius around the center of the screen
(for old trials, the fixation cross was presented at the same
location in which it was presented during the encoding
task). After presentation of the fixation cross, a scene or
scrambled image (either old, cioè., previously viewed during
encoding, or novel lure) appeared for 4 sec. Participants
were given 3 sec to indicate whether the presented image
was “old” or “new” and rate their confidence in that
response, via keypress (z = high confidence “old,” x =
low confidence “old,” n = high confidence “new,” m =
low confidence “new”). To quantify recognition memory
for old images, points were assigned to each response as
follows: z = 2, x = 1, m = 0, and n = −1.
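For concreteness, this scoring amounts to a simple key-to-score lookup. The following is a minimal R sketch; the key mapping is taken from the text, but the object names are illustrative:

```r
# Confidence-weighted recognition scoring for old images
# (key-to-score mapping from the text; object names are illustrative)
score_map <- c(z = 2, x = 1, m = 0, n = -1)

responses <- c("z", "x", "n", "z", "m")  # hypothetical keypresses on old trials
scores <- score_map[responses]           # per-trial recognition scores
mean(scores)                             # participant-level summary
```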

Eye-tracking Procedure

During the encoding task, monocular eye movements
were recorded inside the scanner using the Eyelink
1000 MRI-compatible remote eye tracker with a 1000-Hz
sampling rate (SR Research Ltd.). The eye tracker was
placed inside the scanner bore (behind the participant’s
head) and detected the pupil and corneal reflection via a
mirror mounted on the head coil. During the retrieval
task, monocular eye movements were recorded using
the Eyelink II head-mounted eye tracker with a 500-Hz
sampling rate (SR Research Ltd.). To ensure successful
tracking during both the encoding and retrieval tasks,
9-point calibration was performed before the start of the
task. Online manual drift correction to the location of the
upcoming fixation cross was performed between trials
when necessary. As head movements were restricted in
the scanner, drift correction was rarely performed.
Saccades greater than 0.5° of visual angle were identified by Eyelink as eye movements having a velocity threshold of 30°/sec, an acceleration threshold of 8000°/sec², and a saccade onset threshold of 0.15°. Blinks were defined as periods in which the saccade signal was missing for three or more consecutive samples. All remaining samples (not identified as a saccade or blink) were classified as
fixations.

MRI Protocol

As specified in Liu et al. (2020), a 3-T Siemens MRI scanner
with a standard 32-channel head coil was used to acquire
both structural and functional images. For structural
T1-weighted high-resolution MRI images, we used a standard 3-D magnetization prepared rapid gradient echo pulse sequence with 170 slices and using field of view = 256 × 256 mm, 192 × 256 matrix, 1-mm isotropic resolution, echo time/repetition time = 2.22/2000 msec, flip angle = 9°, and scan time = 280 sec. Functional images were obtained using a T2*-weighted EPI acquisition protocol with repetition time = 2000 msec, echo time = 27 msec, flip angle = 70°, and field of view = 192 × 192 with 64 × 64 matrix (3 mm × 3 mm in-plane resolution, slice thickness = 3.5 mm with
no gap). Two hundred fifty volumes were acquired for
each run. Both structural and functional images were
acquired in an oblique orientation 30° clockwise to the
AC–PC axis. Stimuli were presented with Experiment
Builder (SR Research Ltd.) back-projected to a screen
(projector resolution: 1024 × 768) and viewed with a
mirror mounted on the head coil.

Data Analysis

Gaze Reinstatement Analysis

To quantify the spatial overlap between the gaze patterns
elicited by the same participants viewing the same images
during encoding and retrieval, we computed gaze rein-
statement scores for each image for each participant.


Specifically, we computed the Fisher z-transformed
Pearson correlation between the duration-weighted fixa-
tion density (i.e., heat) map4 (σ = 80) for each image for each participant during encoding and the corresponding density map for the same image being viewed by the same participant during retrieval (“match” similarity; R eyesim package: https://github.com/bbuchsbaum/eyesim; see Figure 1B). Critically, although this measure (“match” similarity) captures the overall similarity between encoding and retrieval gaze patterns, it is possible that such similarity reflects participant-specific (e.g., tendency to view each image from left to right), image-invariant (e.g., tendency to preferentially view the center of the screen) viewing biases. Thus, to control for idiosyn-
cratic viewing tendencies (which were not of particular
interest for this study), we additionally computed the sim-
ilarity between participant- and image-specific retrieval
density maps and 50 other randomly selected encoding
density maps (within participant, stimulus type, and view-
ing condition). The resulting 50 scores were averaged to
yield a single “mismatch” similarity score for each partici-
pant for each image.
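The match/mismatch computation can be sketched in a few lines of R. This is an illustrative reimplementation with toy data, not the eyesim code used in the study; all object names are assumptions:

```r
set.seed(1)

# Toy stand-ins for duration-weighted, Gaussian-smoothed (sigma = 80)
# fixation density maps; the published analysis built these with eyesim.
enc_maps <- replicate(60, matrix(runif(50 * 50), 50, 50), simplify = FALSE)
ret_map  <- enc_maps[[1]] + matrix(rnorm(50 * 50, sd = 0.5), 50, 50)  # retrieval, image 1

# Fisher z-transformed Pearson correlation between two density maps
gaze_sim <- function(a, b) atanh(cor(as.vector(a), as.vector(b)))

match_sim <- gaze_sim(enc_maps[[1]], ret_map)  # same participant, same image

# "mismatch" baseline: mean similarity to 50 other encoding maps from the
# same participant, stimulus type, and viewing condition
others <- sample(2:60, 50)
mismatch_sim <- mean(sapply(others, function(j) gaze_sim(enc_maps[[j]], ret_map)))

reinstatement <- match_sim - mismatch_sim      # score used in the later models
```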

Match and mismatch similarity scores were contrasted
using an ANOVA with Similarity Value as the dependent
variable and Stimulus Type (scene, scrambled), Viewing
Condition (free, fixed), and Similarity Template (match,
mismatch) as the independent variables. For all subse-
quent analyses, gaze reinstatement was reported as the
difference between match and mismatch similarity
scores, thus reflecting the spatial similarity between en-
coding and retrieval scanpaths for the same participant
viewing the same image, controlling for idiosyncratic
viewing biases.

To investigate the effect of gaze reinstatement on mne-
monic performance, we ran a linear mixed effects model
(LMEM) on trial-level accuracy (coded for a linear effect:
high confidence miss = −1, low confidence miss = 0,
low confidence hit = 1, high confidence hit = 2) with fixed
effects including all interactions of gaze reinstatement
(match similarity − mismatch similarity; z scored), stimu-
lus type (scene*, scrambled), and viewing condition (free*,
fixed) as well as random effects including random inter-
cepts for participant and image. Backward model com-
parison (α = .05) was used to determine the most
parsimonious model ( p values approximated with the
lmerTest package; Kuznetsova, Brockhoff, & Christensen,
2017).
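A minimal R sketch of this model follows, assuming a hypothetical trial-level data frame with the variables named in the text (data are simulated, and the backward model comparison step is omitted):

```r
library(lme4)
library(lmerTest)  # Satterthwaite-approximated p values, as in the text

# Hypothetical trial-level data frame; column names are illustrative
trials <- expand.grid(participant = factor(1:36), image = factor(1:144))
trials$reinstatement <- rnorm(nrow(trials))                         # z scored
trials$stim_type <- factor(sample(c("scene", "scrambled"), nrow(trials), TRUE),
                           levels = c("scene", "scrambled"))        # scene = reference
trials$viewing   <- factor(sample(c("free", "fixed"), nrow(trials), TRUE),
                           levels = c("free", "fixed"))             # free = reference
trials$acc <- sample(c(-1, 0, 1, 2), nrow(trials), TRUE)  # confidence-weighted accuracy

# Full model: all fixed-effect interactions plus random intercepts
m_full <- lmer(acc ~ reinstatement * stim_type * viewing +
                 (1 | participant) + (1 | image), data = trials)
summary(m_full)
```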

Saliency Analysis

To characterize gaze patterns at encoding, and specifically,
the type of information encoded into the scanpath, sa-
liency was computed for each image using two methods.
First, we used a leave-one-subject-out cross-validation
procedure to generate duration-weighted informational
saliency (participant data-driven) maps5 for each image
using the aggregated fixations of all participants

(excluding the participant in question) viewing that
image during encoding (mean number of fixations per
image = 204, aggregated from all included participants).
Second, we used the Saliency Toolbox (Walther & Koch,
2006) to generate visual saliency maps by producing 204
pseudo-fixations for each image based on low-level image
properties including color, intensity, and orientation.
Critically, whereas the stimulus (Saliency Toolbox)-driven saliency map takes into account primarily bottom–up stimulus features (e.g., luminance, contrast), the participant data-driven saliency map takes into account any features (bottom–up or top–down) that might attract viewing for any reason (e.g., semantic meaning, memory). To quantify the extent to which individual gaze patterns during encoding were guided by salient bottom–up and top–down features, participant- and image-specific encoding gaze patterns were correlated with both the informational (participant data-driven) and visual (stimulus [Saliency Toolbox]-driven) saliency maps in the same manner as the gaze reinstatement analysis described above. This analysis yielded two scores per participant per image reflecting the extent to which fixations at encoding were guided by high-level image features (i.e., informational saliency; based on the data-driven saliency map) and low-level image features (i.e., visual saliency; based on the stimulus-driven saliency map).
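The leave-one-subject-out aggregation might be sketched as follows (illustrative R code with toy data; the helper and object names are ours, not from the original analysis):

```r
# Leave-one-subject-out informational saliency map for one image:
# average the duration-weighted encoding density maps of all other participants
subj_maps <- replicate(36, matrix(runif(50 * 50), 50, 50), simplify = FALSE)

loso_map <- function(maps, left_out) {
  Reduce(`+`, maps[-left_out]) / (length(maps) - 1)  # mean map over other subjects
}

info_sal_map <- loso_map(subj_maps, left_out = 1)

# Score participant 1's encoding map against the LOSO map, as in the
# reinstatement analysis (Fisher z-transformed Pearson correlation)
info_score <- atanh(cor(as.vector(subj_maps[[1]]), as.vector(info_sal_map)))
```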

To investigate the relationship between encoding gaze
patterns and gaze reinstatement, we ran an LMEM on
gaze reinstatement with visual and informational saliency
scores (z scored) as predictors. To compare the strength
of each saliency score in predicting gaze reinstatement,
saliency scores were dummy coded (visual saliency = 0,
informational saliency = 1). Random intercepts for partic-
ipant and image were also included in the model.
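One way to implement this dummy-coded comparison is a long-format model in which each trial contributes one row per saliency map; the interaction term then tests whether the informational map is the stronger predictor. This is a sketch under our assumptions about the data layout, with simulated data and illustrative names:

```r
library(lme4)
library(lmerTest)

# Long format: two rows per trial, map type dummy coded
# (0 = visual, 1 = informational); all values simulated
sal <- data.frame(
  participant   = factor(rep(1:36, each = 40)),
  image         = factor(rep(1:20, times = 72)),
  map_type      = rep(c(0, 1), times = 720),
  saliency      = rnorm(1440),   # z-scored saliency score
  reinstatement = rnorm(1440)    # trial-level gaze reinstatement
)

# saliency:map_type tests whether the informational map predicts
# reinstatement more strongly than the visual map
m_sal <- lmer(reinstatement ~ saliency * map_type +
                (1 | participant) + (1 | image), data = sal)
summary(m_sal)
```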

fMRI data preprocessing. The fMRI preprocessing procedure was previously reported in Liu et al. (2020); for completeness, it is re-presented here. MRI images were processed using SPM12 (Statistical Parametric Mapping, Wellcome Trust Centre for Neuroimaging, University College London; www.fil.ion.ucl.ac.uk/spm/software/spm12/, Version 7487) in the MATLAB environment
(The MathWorks, Inc.). Following the standard SPM12
preprocessing procedure, slice timing was first corrected
using sinc interpolation with the midpoint slice as the
reference slice. Then, all functional images were aligned
using a six-parameter linear transformation. Next, for
each participant, functional image movement parameters
obtained from the alignment procedure, as well as the
global signal intensity of these images, were checked
manually using the freely available toolbox ART (www
.nitrc.org/projects/artifact_detect/) to detect volumes
with excessive movement and abrupt signal changes.
Volumes indicated as outliers by ART default criteria were
excluded later from statistical analyses. Anatomical
images were coregistered to the aligned functional im-
ages and segmented into white matter, gray matter,


cerebrospinal fluid, skull/bones, and soft tissues using
SPM12 default six-tissue probability maps. These seg-
mented images were then used to calculate the transforma-
tion parameters mapping from the individuals’ native space
to the Montreal Neurological Institute (MNI) template
space. The resulting transformation parameters were used
to transform all functional and structural images to the MNI
template. For each participant, the quality of coregistration
and normalization was checked manually and confirmed by
two research assistants. The functional images were finally
resampled at a 2 × 2 × 2 mm resolution and smoothed
using a Gaussian kernel with an FWHM of 6 mm. The first
five fMRI volumes from each run were discarded to allow
the magnetization to stabilize to a steady state, resulting
In 245 volumes in each run.

fMRI Analysis

Parametric modulation analysis. To interrogate our
main research question, that is, which brain regions’ ac-
tivity during encoding was associated with subsequent
gaze reinstatement, we conducted a parametric modula-
tion analysis in SPM12. Specifically, we first added the
condition mean activation regressors for the free- E
fixed-viewing conditions, by convolving the onset of trials
of each condition with the canonical hemodynamic re-
sponse function in SPM12. We then added the trial-wise
gaze reinstatement measure as our interested linear mod-
ulator, which was also convolved with the hemodynamic
response function. We also added motion parameters, as
detailed in Liu et al. (2020), as regressors of no interest.
Default high-pass filters with a cutoff of 128 sec and a
first-order autoregressive model AR(1) were also applied.
Using this design matrix, we first estimated the modula-
tion effect of gaze reinstatement at the individual level.
These beta estimates, averaged across all scene runs, were
then carried to the group-level analyses in which within-
participant t tests were used to examine which brain
regions showed stronger activity when greater gaze rein-
statement was observed. For this analysis, we primarily fo-
cused on the free-viewing scene condition as this is the
condition in which the gaze reinstatement measure is
most meaningful (because participants were allowed to
freely move their eyes). In this analysis, the HPC and
PPA served as our a priori ROIs (see Supplementary
Figure S1 in Liu et al., 2020). As specified in Liu et al.
(2020), the HPC ROI for each participant was obtained
using the Freesurfer recon-all function, Version 6.0 (surfer.nmr.mgh.harvard.edu; Fischl, 2012). The PPA ROIs were obtained using the
“scene versus scrambled color tile” picture contrast. The
MNI coordinates for the peak activation of the PPA were
[32, −34, −18] for the right PPA and [−24, −46, −12]
for the left PPA. The left and right PPA ROIs contained
293 E 454 voxels, rispettivamente.

To explore whether other brain regions showed gaze
reinstatement modulation effects, in addition to the

ROI analysis, we also obtained voxel-wise whole-brain re-
sults. As an exploratory analysis, we used a relatively le-
nient threshold of p = .005 with a 10-voxel extension
(no correction), which can also facilitate future meta-
analyses (Lieberman & Cunningham, 2009).

Brain activation pattern similarity between parametric
modulation of gaze reinstatement and subsequent
memory. To understand the extent to which there
was similar modulation of brain activity by gaze reinstate-
ment and by subsequent memory, we calculated cross-
voxel brain activation pattern similarity between the
two parametric modulation effects. This analysis allowed
us to test whether the brain activity associated with the
two behavioral variables (cioè., trial-wise gaze reinstate-
ment and subsequent memory) shares a similar pattern.
First, we obtained subsequent memory modulation ef-
fects as detailed in Liu et al. (2020). Specifically, in this
subsequent memory effect analysis, we coded subse-
quent recognition memory for each encoding trial based
on participants’ hit/miss response and confidence (cor-
rect recognition with high confidence = 2, correct
recognition with low confidence = 1, missed recognition
with low confidence = 0, missed recognition with high
confidence = −1). We then used this measure as a linear
parametric modulator to find brain regions that showed a
subsequent memory effect, questo è, stronger activation
when trials were subsequently better remembered. We
averaged the subsequent memory effect estimates across
runs for each participant. We then extracted unthresh-
olded voxel-by-voxel subsequent memory effects and
gaze reinstatement effects (cioè., estimated betas) for the
HPC and PPA, separately. These beta values were then
vectorized, and Pearson correlations were calculated
between the two vectors of the two modulation effects
for each ROI. Finally, these Pearson correlations were
Fisher z transformed to reflect the cross-voxel pattern
similarity between the subsequent memory effect and
the gaze reinstatement modulation effect.
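At the ROI level, this computation reduces to a few lines of R (a sketch with simulated betas; in the actual analysis, each vector holds one estimated beta per ROI voxel):

```r
# Cross-voxel pattern similarity between two modulation effects in one ROI
set.seed(2)
beta_memory <- rnorm(454)                      # subsequent memory betas
beta_gaze   <- 0.3 * beta_memory + rnorm(454)  # gaze reinstatement betas

pattern_sim <- atanh(cor(beta_memory, beta_gaze))  # Fisher z of Pearson r
```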

Although we mainly focused on the brain activation
pattern similarity between the two modulation effects
in the free-viewing scene condition, we also obtained
the same measure for the fixed-viewing scene condition
to provide a control condition. If the brain activation pat-
tern modulated by the gaze reinstatement measure is re-
lated to memory processing in the free-viewing scene
condition, it should show larger-than-zero pattern simi-
larity with the subsequent memory effects, which should
also be greater than those in the fixed-viewing scene con-
dition. Therefore, at the group level, we used one-sample
t tests to examine whether the similarity z scores in the
free-viewing scene condition were larger than zero and
used a paired t test to compare the similarity scores
against those in the fixed-viewing scene condition.
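These group-level tests amount to the following (sketch with simulated per-participant z scores; variable names are ours):

```r
# Group-level tests on per-participant similarity z scores (simulated values)
z_free  <- rnorm(36, mean = 0.10, sd = 0.2)
z_fixed <- rnorm(36, mean = 0.00, sd = 0.2)

t.test(z_free, mu = 0)                  # free-viewing similarity > 0?
t.test(z_free, z_fixed, paired = TRUE)  # free vs. fixed viewing
```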

In addition to the ROI brain activation pattern similarity,
we also examined brain activation similarity between
subsequent memory and gaze reinstatement for the


whole brain in each voxel using a searchlight analysis (The
Decoding Toolbox v3.997; Hebart, Görgen, & Haynes,
2015). Specifically, for each voxel, we applied an 8-mm
spheric searchlight to calculate the across-voxel (voxels in-
cluded in this searchlight) brain activation pattern similar-
ity between the subsequent memory effect and the gaze
reinstatement modulation effect, using the same proce-
dure detailed above for the ROI analysis. We first generat-
ed the brain activation similarity z-score images for the
free- and fixed-viewing scene conditions separately for
each participant. At the group level, the individual partic-
ipants’ brain activation similarity z-score images were test-
ed against zero for the free-viewing scene condition and
compared to the similarity images in the fixed-viewing
scene condition using paired t tests. For this whole-brain
voxel-wise analysis, we used a threshold of p = .005 with a
10-voxel extension (uncorrected; see Lieberman &
Cunningham, 2009).
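The searchlight logic can be sketched as follows. The actual analysis used The Decoding Toolbox; this illustrative R version simplifies coordinate handling and masking, and all names are assumptions:

```r
# For each voxel, correlate the two modulation maps across all voxels
# within an 8-mm sphere centered on that voxel
searchlight_sim <- function(mem, gaze, coords, radius = 8) {
  sapply(seq_len(nrow(coords)), function(i) {
    d2  <- rowSums(sweep(coords, 2, coords[i, ])^2)  # squared distances (mm^2)
    idx <- which(d2 <= radius^2)                     # voxels inside the sphere
    atanh(cor(mem[idx], gaze[idx]))                  # Fisher z similarity
  })
}

# Toy example: voxels on a 2-mm grid with random modulation maps
coords  <- as.matrix(expand.grid(x = seq(0, 18, 2), y = seq(0, 18, 2), z = c(0, 2)))
sim_map <- searchlight_sim(rnorm(nrow(coords)), rnorm(nrow(coords)), coords)
```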

RESULTS

Behavioral Results

Results of the ANOVA on recognition memory perfor-
mance are reported in Liu et al., 2020. In short, a significant
interaction of Stimulus Type × Viewing Condition indi-
cated that recognition memory was significantly higher
in the free-viewing condition than in the fixed-viewing
condition, for scene images only, and for scenes relative
to scrambled images, for free-viewing only (see Figure 2E
in Liu et al., 2020).

Eye Movement Results

To determine whether gaze reinstatement was significantly
greater than chance, we ran an ANOVA with Similarity Value
as the dependent variable and Stimulus Type (scene, scram-
bled), Viewing Condition (free, fixed), and Similarity
Template (match, mismatch) as the independent variables.
If individual retrieval gaze patterns are indeed image

specific, they should be more similar to the gaze pattern
for the same image viewed at encoding (match) than for
other images within the same participant, image category,
and condition (mismatch). Results of the ANOVA revealed a
significant three-way interaction of Similarity Template,
Stimulus Type, and Viewing Condition, F(1, 34) = 7.09,
p = .012, ηp² = .17. Post hoc tests of the difference in mean
match and mismatch similarity scores indicated that match
similarity was significantly greater than mismatch similarity
in all conditions and categories [fixed scene: t(69.7) = 2.12, p = .037; fixed scrambled: t(69.7) = 3.60, p = .001; free scene: t(69.7) = 6.22, p < .001, see Figure 2A; free scrambled: t(69.7) = 4.583, p < .001].

To explore the relationship between gaze reinstatement and mnemonic performance, we ran an LMEM on trial-level accuracy with interactions of gaze reinstatement (match similarity − mismatch similarity), stimulus type (scene*, scrambled), and viewing condition (free*, fixed) as fixed effects as well as participant and image as random effects. Results of the final best fit model indicated that accuracy was significantly greater for scenes relative to scrambled images (β = −0.24, SE = 0.03, t = −8.19, p < .001), and this effect was significantly attenuated for fixed viewing (Stimulus Type × Viewing Condition: β = 0.17, SE = 0.03, t = 5.10, p < .001). Accuracy was also significantly greater for free relative to fixed viewing (β = −0.17, SE = 0.16, t = −10.56, p < .001; see Figure 2A), and this effect was significantly attenuated for scrambled images (see Stimulus Type × Viewing Condition). Finally, the model revealed a significant positive effect of gaze reinstatement on accuracy (β = 0.06, SE = 0.01, t = 5.28, p < .001; see Figure 2B) for free-viewing scenes, and this effect was significantly attenuated for fixed viewing (Gaze Reinstatement × Viewing Condition: β = −0.04, SE = 0.14, t = −2.82, p = .005) and for scrambled images (Gaze Reinstatement × Stimulus Type: β = −0.06, SE = 0.16, t = −3.53, p < .001). The addition of number of gaze fixations to the model significantly improved the model fit (χ2 = 15.52, p < .001; see also Liu et al., 2020) but importantly did not abolish the effect of gaze reinstatement. Furthermore, a correlation of mean gaze reinstatement scores and mean cumulative encoding gaze fixations was nonsignificant (r = .049, p = .79), suggesting that these effects were independent.

Figure 2. Visualization of gaze reinstatement effect for free viewing of scenes. (A) Match similarity versus mismatch similarity scores. (B) Gaze reinstatement (match similarity − mismatch similarity) scores as a function of recognition memory accuracy. Sim = similarity; Hi Conf = high confidence; Lo Conf = low confidence.

To determine whether gaze reinstatement (i.e., the extent to which encoding gaze patterns were recapitulated at retrieval) was related to gaze patterns (i.e., the types of information viewed) at encoding, we derived two measures to capture the extent to which individual gaze patterns at encoding reflected “salient” image regions. Given that “saliency” can be defined by both bottom–up (e.g., bright) and top–down (e.g., meaningful) image features, with the latter generally outperforming the former in predictive models (e.g., Henderson & Hayes, 2018; O’Connell & Walther, 2015), we computed two saliency maps for each image using the Saliency Toolbox (visual saliency map, reflecting bottom–up stimulus features) and aggregated participant data (informational saliency map, reflecting bottom–up and top–down features). Gaze patterns for each participant for each image were compared to both the visual and informational saliency maps, yielding two saliency scores.
To probe the relationship between encoding gaze patterns and subsequent gaze reinstatement, we ran an LMEM on gaze reinstatement with saliency scores (visual*, informational) as fixed effects and participant and image as random effects. Results of the model revealed a significant effect of saliency on controlled gaze reinstatement (β = 0.10, SE = 0.01, t = 9.36, p < .001), indicating that similarity of individual encoding gaze patterns to the visual saliency map predicted subsequent gaze reinstatement at retrieval. Notably, the saliency effect was significantly increased when the informational saliency map was used in place of the visual saliency map (β = 0.06, SE = 0.01, t = 5.01, p < .001), further indicating that gaze reinstatement is best predicted by encoding gaze patterns that prioritize “salient” image regions, being regions high in bottom–up and/or top–down informational content.

fMRI Results

To answer our main research question regarding the neural activity patterns at encoding that predict subsequent gaze reinstatement (at retrieval), we first examined the brain regions in which activations during encoding were modulated by trial-wise subsequent gaze reinstatement scores (i.e., brain regions that showed stronger activation for trials with higher subsequent gaze reinstatement). Our ROI analyses did not yield significant effects for either the HPC or PPA (t = −0.31–1.13, p = .76–.26; Figure 3A). However, as evidenced by the whole-brain voxel-wise results (Figure 3B), the occipital poles bilaterally showed a parametric modulation by subsequent gaze reinstatement at p = .005, with a 10-voxel extension (no correction). Two clusters in the BG also showed effects at this threshold. All regions that showed gaze reinstatement modulation effects at this threshold are presented in Table 1.

As reported previously by Liu et al. (2020; see Figure 6A), both the PPA and HPC showed a parametric modulation by subsequent memory; that is, the PPA and HPC were activated more strongly for scenes that were later successfully recognized versus forgotten. Although PPA and HPC activation at the mean level were not modulated by subsequent gaze reinstatement, we investigated whether the variation across voxels within each ROI in supporting subsequent memory was similar to the variation of these voxels in supporting subsequent gaze reinstatement. Critically, this cross-voxel brain modulation pattern similarity analysis can reveal whether the pattern of activation of voxels in an ROI contains shared information, or supports the overlap, between subsequent memory and subsequent gaze reinstatement effects. Results of this analysis revealed significant pattern similarity between the two modulation effects, subsequent memory and gaze reinstatement, in both the right PPA and right HPC, t = 2.37 and 3.31, and p = .024 and .002, respectively.

Figure 3. Brain activation predicted by subsequent gaze reinstatement. (A) ROI analysis revealed no significant gaze reinstatement modulation effects for HPC and PPA (all ps > .05). (B) Voxel-wise whole-brain results for gaze reinstatement modulation (thresholded at p = .005, 10-voxel extension, no corrections).


Table 1. Brain Regions That Positively Predicted Trial-wise Gaze Reinstatement

Anatomical Areas     Cluster Size   t Value     p Value    MNI Coordinates (x, y, z)
Precentral_L              63        3.983363    .000164    −30, −4, 42
Putamen_R                 91        3.919645    .000197    28, 2, 16
Occipital_Inf_R          180        3.858175    .000235    32, −94, −4
Occipital_Mid_L          145        3.834414    .000251    −28, −98, 2
Putamen_L                161        3.815405    .000265    −26, 12, 12
Caudate_R                 12        3.431526    .000778    14, 6, 20
Supp_Motor_Area_L         12        3.112625    .00184     −10, 16, 50
Putamen_R                 14        2.931331    .002952    20, 8, 8

All clusters survived the threshold of p < .005, with a 10-voxel extension, no correction. The names of the anatomical regions in the table, obtained using the automated anatomical labeling (AAL) toolbox for SPM12, follow the AAL template naming convention (Tzourio-Mazoyer et al., 2002). R/L = right/left hemisphere; Mid = middle; Inf = inferior.

The left HPC showed a marginally significant effect, t = 1.88, p = .069, whereas the left PPA similarity effect was not significant, t = 1.41, p = .17 (Figure 4A and B).

Since the occipital pole and BG regions showed stronger mean level activation for trials with greater subsequent gaze reinstatement, we also examined the pattern similarity in the voxel clusters in these two regions. Specifically, we obtained the two voxel clusters in the BG and the occipital pole that survived the threshold of p = .005 (no correction) in the gaze reinstatement modulation analysis (Figure 3B) and then computed the pattern similarity scores as we did for the PPA and HPC (see above). Similar to the PPA and HPC results, the right BG and right occipital pole ROIs showed significant pattern similarity between the subsequent memory and subsequent gaze reinstatement modulation effects, t = 2.45 and 2.36, and p = .02 and .024, respectively. The left ROIs did not show any significant results, p > .05
(Figure 4C and D).

Directly comparing the brain activation pattern similarity between the free- and fixed-viewing conditions revealed greater similarity in the free-viewing condition for the right PPA, HPC, and occipital pole regions (t = 3.84, 3.55, and 2.24, and p = .0005, .001, and .032, respectively). The left HPC and a region in the left fusiform gyrus also showed marginally significant effects (t = 2.02 and 1.91, and p = .051 and .065, respectively).

Figure 4. Pattern similarity between gaze reinstatement and subsequent memory modulation effects. PPA = parahippocampal place area; HPC = hippocampus; Occip = occipital; BG = basal ganglia; +p < .09, *p < .05, **p < .005, ***p < .001.

Figure 5. Whole-brain pattern similarity using a searchlight (8-mm sphere) between the subsequent memory and subsequent gaze reinstatement modulation effects (threshold p = .005, 10-voxel extension, no corrections).

As reported earlier, gaze reinstatement and memory performance were correlated at the behavioral level. Therefore, to ensure that the observed pattern similarity between the subsequent memory and gaze reinstatement modulation effects was specific to brain regions that were important for the scene encoding task, such as the ROIs tested above, and not general to all brain regions (i.e., reflecting the shared variance between the two behavioral measures at the brain level), we employed a searchlight method in which a sphere with a radius of 8 mm was used to obtain the similarity value at each voxel of the brain. As shown in Figure 5A, not all brain regions showed the similarity effect. Instead, two large clusters in both the left and right HPC showed significant similarity between the subsequent memory and gaze reinstatement modulation effects (SPM small volume correction using HPC mask: cluster level pFWE-corr = .004 and .012, cluster size = 248 and 172 voxels). Other regions including regions in the ventral and dorsal visual stream also showed similar patterns. These results confirm that the pattern similarity effect (i.e., the brain manifestation of the shared variance between gaze reinstatement and memory performance) occurred specifically in brain regions that are known to play key roles in visual memory encoding.
The embedded brain activation image on the left shows the occipital clusters that were modulated by subsequent gaze reinstatement. The embedded brain activation image on the right shows the overlap between the occipital clusters that were modulated by subsequent gaze reinstatement (blue) and the clusters that showed subsequent memory effects (green). 1556 Journal of Cognitive Neuroscience Volume 34, Number 9 this region may mediate the relationship between gaze reinstatement and subsequent memory. To test this pre- diction, we conducted a mediation analysis in which we examined whether the effect of gaze reinstatement on subsequent memory could be significantly reduced when brain activity in the occipital pole, aggregated across the left and right, was entered as a mediator in the regression analysis. Specifically, for each participant, we first estimated brain activity for each scene image in each condition using the beta-series method (Rissman, Gazzaley, & D’Esposito, 2004). We then extracted the occipital pole activation corresponding to the left and right occipital pole ROIs. Next, at the individual level, we conducted a mediation analysis with the trial-wise gaze reinstatement measure as the predictor (x), occipital pole ROI activa- tion as the mediator (m), and the subsequent memory measure as the outcome variable ( y). The regression co- efficient a (see Figure 6) was obtained when x was used to predict m, b was obtained when m was used to predict 0 was obtained when x y (while controlling for x), and c was used to predict y (while controlling for m). Finally, 0 were averaged across runs the coefficients a, b, and c for each participant and then tested at the group level using t tests. In line with our prediction, occipital pole activation partially mediated the prediction of gaze rein- statement on subsequent memory (indirect path: t = 1.86, p = .035, one-tailed; Figure 6). DISCUSSION This study explored the neural correlates of functional gaze reinstatement—the recapitulation of encoding- related gaze patterns during retrieval that is significantly predictive of mnemonic performance. Consistent with the Scanpath Theory (Noton & Stark, 1971a, 1971b), re- search using eye movement monitoring has demon- strated that the spatial overlap between encoding and retrieval gaze patterns is correlated with behavioral per- formance across a number of memory tasks (e.g., Wynn et al., 2018, 2020; Damiano & Walther, 2019; Scholz et al., 2016; Laeng et al., 2014; Olsen et al., 2014; Johansson & Johansson, 2013; Foulsham et al., 2012; Laeng & Teodorescu, 2002; for a review, see Wynn et al., 2019). Indeed, guided or spontaneous gaze shifts to regions viewed during encoding (i.e., gaze reinstatement) have been proposed to support memory retrieval by reactivat- ing the spatiotemporal encoding context ( Wynn et al., 2019). In line with this proposal, recent work using con- current eye tracking and fMRI has indicated that gaze re- instatement elicits patterns of neural activity typically associated with successful memory retrieval, including HPC activity (Ryals et al., 2015) and whole-brain neural reactivation (Bone et al., 2019). Critically, however, these findings do not speak to the cognitive and neural pro- cesses at encoding that support the creation of functional scanpaths. This question is directly relevant to Scanpath Theory, which contends that eye movements not only facilitate memory retrieval but are themselves embedded in the memory trace (Noton & Stark, 1971a, 1971b). 
Accordingly, this study investigated the neural regions that support the formation and subsequent recapitula- tion of functional scanpaths. Extending earlier findings, and lending support to Scanpath Theory, here we show for the first time that functional gaze reinstatement is cor- related with encoding-related neural activity patterns in brain regions associated with sensory (visual) processing, motor (gaze) control, and memory. Importantly, these findings suggest that, like objects and the relations among them, scanpaths may be bound into memory rep- resentations, such that their recapitulation may cue, and facilitate the retrieval of, additional event elements (see Wynn et al., 2019). Consistent with previous work, this study found evi- dence of gaze reinstatement that was significantly greater than chance and significantly predictive of recognition memory accuracy when participants freely viewed repeated scenes. In addition, gaze reinstatement (measured during free viewing of scenes at retrieval) was positively associated with encoding-related neural activity in the BG and in the occipital pole. Previous work has linked the BG to voluntary saccade control, particularly when sac- cades are directed toward salient or rewarding stimuli (for a review, see Gottlieb, Hayhoe, Hikosaka, & Rangel, 2014), and to memory-guided attentional orient- ing (Goldfarb, Chun, & Phelps, 2016). Dense connections with brain regions involved in memory and oculomotor control including the HPC and FEFs (Shen, Bezgin, Selvam, McIntosh, & Ryan, 2016) make the BG ideally po- sitioned to guide visual attention to informative (i.e., high reward probability) image regions. The occipital pole has been similarly implicated in exogenous orienting (Fernández & Carrasco, 2020) and visual processing, in- cluding visual imagery (St-Laurent, Abdi, & Buchsbaum, 2015), partly because of the relationship between neural activity in the occipital pole and gaze measures including fixation duration (Choi & Henderson, 2015) and saccade length (Frey, Nau, & Doeller, 2020). Notably, the occipital pole region identified in the cur- rent study was spatially distinct from the occipital place area seen in other studies (e.g., Bonner & Epstein, 2017; Patai & Spiers, 2017; Dilks, Julian, Paunov, & Kanwisher, 2013), suggesting that it may differentially contribute to scene processing, possibly by guiding visual exploration. Moreover, the identified occipital pole region did not include area V1, suggesting that unlike (the number of ) gaze fixations, which modulate activity in early visual regions (Liu et al., 2017), gaze reinstate- ment does not directly reflect the amount of bottom–up visual input present at encoding. Rather, gaze reinstate- ment may be related more specifically to the selective sampling and processing of informative regions at encoding (see also Fehlmann et al., 2020). Indeed, during encoding, viewing of informationally salient regions, as defined by participant data-driven saliency maps Wynn, Liu, and Ryan 1557 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u / j / o c n a r t i c e - p d l f / / / 3 4 9 1 5 4 7 2 0 3 7 4 5 1 / / j o c n _ a _ 0 1 7 6 1 p d . f b y g u e s t t o n 0 8 S e p e m b e r 2 0 2 3 capturing both low-level and high-level image features, was significantly more predictive of subsequent gaze reinstatement than viewing of visually salient regions, as defined by stimulus (Saliency Toolbox)-driven saliency maps capturing low-level image features. 
The occipital pole additionally partially mediated the effect of gaze reinstatement on subsequent memory, further suggesting that this region may contribute to mnemonic processes via the formation of gaze scanpaths reflecting informationally salient image regions. Taken together with the neuroimaging results, these findings suggest that viewing, and consequently encoding, regions high in informational and/or rewarding content may facilitate the laying down of a scanpath that, when recapitulated, facilitates recognition via comparison of presented visual input with stored features (see Wynn et al., 2019).

To further interrogate the relationship between gaze reinstatement and memory at the neural level, we conducted a pattern similarity analysis to identify brain regions in which neural activity patterns corresponding to gaze reinstatement and those corresponding to subsequent memory covaried. Results of this analysis revealed significant overlap between the subsequent memory and subsequent gaze reinstatement effects in the occipital pole and BG (regions that showed a parametric modulation by subsequent gaze reinstatement) and in the PPA and HPC (regions that showed a parametric modulation by subsequent memory; see Liu et al., 2020). These regions may therefore be important for scene encoding (see Liu et al., 2020), in part through their role in linking the scanpath to the resulting memory representation. Specifically, parametric modulation and pattern similarity effects in the occipital pole and BG suggest that, when informationally salient image features are selected for overt visual attention, those features are encoded into memory along with the fixations made to them, which are subsequently recapitulated during retrieval. Consistent with Scanpath Theory's notion of the scanpath as a sensory–motor memory trace (Noton & Stark, 1971a, 1971b), these findings suggest that eye movements themselves may be part and parcel of the memory representation. The absence of gaze reinstatement-related activity in object- or location-specific processing regions (e.g., PPA, lateral occipital cortex) or low-level visual regions (e.g., V1) further suggests that reinstated scanpaths (at least in the present task) cannot be solely attributed to overlap in bottom–up visual saliency or to memory for particularly salient image features. Indeed, recent work from Wang, Baumgartner, Kaule, Hanke, and Pollmann (2019) indicates that simply following a face- or house-related gaze pattern (without seeing a face or house) is sufficient to elicit activity in the FFA or PPA, respectively, suggesting that visual identification is not based solely on visual features but rather can also be supported by efferent oculomotor signals. The present findings further suggest that such signals, serving as a part of the memory representation, may be referenced and used by the HPC, similar to other mnemonic features (e.g., spatial locations, temporal order; Yonelinas, 2013; Davachi, 2006), to cue retrieval of associated elements within memory.
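The cross-voxel pattern similarity analysis referenced above can be pictured as follows: within an ROI, the per-voxel parametric modulation effects (betas) for subsequent gaze reinstatement are correlated, across voxels, with those for subsequent memory, and the resulting participant-level correlations are tested against zero. A minimal sketch under those assumptions (the Fisher z transform for the group test is a conventional choice here, not necessarily the authors' exact pipeline):

```python
import numpy as np
from scipy import stats

def cross_voxel_similarity(gaze_betas, memory_betas):
    """Correlate two modulation maps across voxels within one ROI.

    gaze_betas: per-voxel betas for the subsequent gaze reinstatement
    modulator; memory_betas: per-voxel betas for the subsequent memory
    modulator, in the same voxel order.
    """
    return np.corrcoef(gaze_betas, memory_betas)[0, 1]

def group_similarity_test(per_participant_r):
    """Fisher-z the participant-level correlations, then t test vs. zero."""
    z = np.arctanh(np.asarray(per_participant_r))
    return stats.ttest_1samp(z, 0.0)
```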
That is, although the HPC may not be directly involved in generating or storing the scanpath (which may instead rely on visual and oculomotor regions), similar patterns of HPC activity that predict subsequent gaze reinstatement and subsequent memory suggest that the HPC may index these oculomotor programs, along with other signals, in the service of mnemonic binding and retrieval functions (e.g., relative spatial position coding; see Connor & Knierim, 2017). Importantly, the finding that the HPC, in particular, similarly codes for subsequent memory and subsequent gaze reinstatement is consistent with its purported role in coordinating sensory and mnemonic representations (see Knapen, 2021). Indeed, early accounts positioned the HPC as the site at which already-parsed information from cortical processors is bound into lasting memory representations (Cohen & Eichenbaum, 1993). The notion that the oculomotor effector trace is included within the HPC representation is aligned with more recent work showcasing the inherent, and reciprocal, connections between the HPC and oculomotor systems. Research using computational modeling and network analyses, for example, indicates that the HPC and FEF are both anatomically and functionally connected (Ryan, Shen, Kacollja, et al., 2020; Shen et al., 2016; for a review, see Ryan, Shen, & Liu, 2020). Indeed, whereas damage to the HPC leads to impairments on several eye-movement-based measures (e.g., Olsen et al., 2015, 2016; Hannula, Ryan, Tranel, & Cohen, 2007; Ryan, Althoff, Whitlow, & Cohen, 2000), disruption of the FEF (via TMS) leads to impairments in memory recall (Wantz et al., 2016). Other work further suggests that visual and mnemonic processes share a similar reference frame, with connectivity between the HPC and V1 showing evidence of retinotopic organization during both visual stimulation and visual imagery (Knapen, 2021; see also Silson, Zeidman, Knapen, & Baker, 2021). That the HPC may serve as a potential "convergence zone" for binding disparate event elements, including eye movements, is further supported by evidence from intracranial recordings in humans and animals suggesting that the coordination of eye movements with HPC theta rhythms supports memory encoding (Hoffman et al., 2013; Jutras, Fries, & Buffalo, 2013) and retrieval (Kragel et al., 2020), and by evidence of gaze-centric cells in the HPC (and entorhinal cortex; Meister & Buffalo, 2018; Killian, Jutras, & Buffalo, 2012) that respond to a particular gaze location (e.g., Chen & Naya, 2020; Rolls, Robertson, & Georges-François, 1997; for a review, see Nau, Julian, & Doeller, 2018). Extending this work, the present findings suggest that gaze reinstatement and subsequent memory share similar variance in the brain and may be supported by similar HPC mechanisms. Furthermore, these findings critically suggest that reinstated gaze patterns may be recruited and used by the HPC in the service of memory retrieval.
With this study, we provide novel evidence that encoding-related activity in the occipital pole and BG during free viewing of scenes is significantly predictive of subsequent gaze reinstatement, suggesting that scanpaths that are later recapitulated may contain important visuosensory and oculomotor information. Indeed, gaze reinstatement was correlated more strongly with encoding of informationally salient regions than with visually salient regions, suggesting that the scanpath carries information related to high-level image content. Critically, visual, oculomotor, and mnemonic ROIs (i.e., occipital pole, BG, PPA, HPC) showed similar patterns of activity corresponding to subsequent memory (see Liu et al., 2020) and subsequent gaze reinstatement, further supporting a common underlying neural mechanism. Lending support to Scanpath Theory, the present results suggest that gaze scanpaths, beyond scaffolding memory retrieval, are themselves embedded in the memory representation (see Cohen & Eichenbaum, 1993), similar to other elements, including spatial and temporal relations (see Yonelinas, 2013; Davachi, 2006), and may be utilized by the HPC to support memory retrieval. Given the nature of the present task, we focused here on the spatial overlap (including fixation location and duration information) between gaze patterns during encoding and retrieval, but future work could also explore how temporal order information embedded in the scanpath may similarly or differentially contribute to memory retrieval. Thus, although further research will be needed to fully elucidate the neural mechanisms supporting functional gaze reinstatement, particularly across different tasks and populations, the current findings spotlight the unique interactions between overt visual attention and memory that extend beyond behavior to the level of the brain. Moreover, these findings speak to the importance of considering, and accounting for, effector systems, including the oculomotor system, in models of memory and cognition more broadly.

Acknowledgments

This work was supported by a Vision: Science to Applications postdoctoral fellowship awarded to Z. X. L. Reprint requests should be sent to Jennifer D. Ryan, Rotman Research Institute, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada, or via e-mail: jryan@research.baycrest.org.

Author Contributions

Jordana S. Wynn: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Zhong-Xu Liu: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Jennifer D. Ryan: Conceptualization; Funding acquisition; Project administration; Resources; Supervision; Writing—Review & editing.

Funding Information

Jennifer D. Ryan, Natural Sciences and Engineering Research Council of Canada (https://dx.doi.org/10.13039/501100000038), grant number: RGPIN-2018-06399. Jennifer D. Ryan, Canadian Institutes of Health Research (https://dx.doi.org/10.13039/501100000026), grant number: MOP126003.
Diversity in Citation Practices

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this article report its proportions of citations by gender category to be as follows: M/M = .55, W/M = .175, M/W = .075, and W/W = .2.

Notes

1. For further details regarding stimulus selection and feature equivalence, see Liu et al. (2020).
2. To achieve luminance and contrast balance, all color RGB images were transferred to NTSC space using the built-in MATLAB function rgb2ntsc.m. Then, the luminance (i.e., the NTSC Y component) and contrast (i.e., the standard deviation of luminance) were obtained for each image, and the mean values were used to balance (i.e., equalize) the luminance and contrast for all images using the SHINE toolbox (Willenbockel et al., 2010). Finally, the images were transferred back to their original RGB space using the MATLAB function ntsc2rgb.m.
3. For further details regarding the randomization procedure, see Liu et al. (2020).
4. For further details regarding the density map computation, see Wynn et al. (2020).
5. The density map computation was the same as that used for the gaze reinstatement analysis; see Wynn et al. (2020).

REFERENCES

Bone, M. B., St-Laurent, M., Dang, C., McQuiggan, D. A., Ryan, J. D., & Buchsbaum, B. R. (2019). Eye movement reinstatement and neural reactivation during mental imagery. Cerebral Cortex, 29, 1075–1089. https://doi.org/10.1093/cercor/bhy014, PubMed: 29415220
Bonner, M. F., & Epstein, R. A. (2017). Coding of navigational affordances in the human visual system. Proceedings of the National Academy of Sciences, U.S.A., 114, 4793–4798. https://doi.org/10.1073/pnas.1618228114, PubMed: 28416669
Brewer, J. B., Zhao, Z., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. E. (1998). Making memories: Brain activity that predicts how well visual experience will be remembered. Science, 281, 1185–1187. https://doi.org/10.1126/science.281.5380.1185, PubMed: 9712581
Chan, C. Y. H., Chan, A. B., Lee, T. M. C., & Hsiao, J. H. (2018). Eye-movement patterns in face recognition are associated with cognitive decline in older adults. Psychonomic Bulletin and Review, 25, 2200–2207. https://doi.org/10.3758/s13423-017-1419-0, PubMed: 29313315
Chen, H., & Naya, Y. (2020). Automatic encoding of a view-centered background image in the macaque temporal lobe. Cerebral Cortex, 30, 6270–6283. https://doi.org/10.1093/cercor/bhaa183, PubMed: 32637986
Choi, W., & Henderson, J. M. (2015). Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing. Neuropsychologia, 75, 109–118. https://doi.org/10.1016/j.neuropsychologia.2015.05.027, PubMed: 26026255
Cohen, N. J., & Eichenbaum, H. (1993). Memory, amnesia, and the hippocampal system. Cambridge, MA: MIT Press.
Connor, C. E., & Knierim, J. J. (2017). Integration of objects and space in perception and memory. Nature Neuroscience, 20, 1493–1503. https://doi.org/10.1038/nn.4657, PubMed: 29073645
Damiano, C., & Walther, D. B. (2019). Distinct roles of eye movements during memory encoding and retrieval. Cognition, 184, 119–129. https://doi.org/10.1016/j.cognition.2018.12.014, PubMed: 30594878
Davachi, L. (2006). Item, context and relational episodic encoding in humans. Current Opinion in Neurobiology, 16, 693–700. https://doi.org/10.1016/j.conb.2006.10.012, PubMed: 17097284
Dilks, D. D., Julian, J. B., Paunov, A. M., & Kanwisher, N. (2013). The occipital place area is causally and selectively involved in scene perception. Journal of Neuroscience, 33, 1331–1336. https://doi.org/10.1523/JNEUROSCI.4081-12.2013, PubMed: 23345209
Fehlmann, B., Coynel, D., Schicktanz, N., Milnik, A., Gschwind, L., Hofmann, P., et al. (2020). Visual exploration at higher fixation frequency increases subsequent memory recall. Cerebral Cortex Communications, 1, tgaa032. https://doi.org/10.1093/texcom/tgaa032, PubMed: 34296105
Fernández, A., & Carrasco, M. (2020). Extinguishing exogenous attention via transcranial magnetic stimulation. Current Biology, 30, 4078–4084. https://doi.org/10.1016/j.cub.2020.07.068, PubMed: 32795447
Firestone, A., Turk-Browne, N. B., & Ryan, J. D. (2007). Age-related deficits in face recognition are related to underlying changes in scanning behavior. Aging, Neuropsychology, and Cognition, 14, 594–607. https://doi.org/10.1080/13825580600899717, PubMed: 18038358
Fischl, B. (2012). FreeSurfer. Neuroimage, 62, 774–781. https://doi.org/10.1016/j.neuroimage.2012.01.021, PubMed: 22248573
Foulsham, T., Dewhurst, R., Nyström, M., Jarodzka, H., Johansson, R., Underwood, G., et al. (2012). Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach. Journal of Eye Movement Research, 5, 1–14. https://doi.org/10.16910/jemr.5.4.3
Frey, M., Nau, M., & Doeller, C. F. (2020). MR-based camera-less eye tracking using deep neural networks. bioRxiv. https://doi.org/10.1101/2020.11.30.401323
Goldfarb, E. V., Chun, M. M., & Phelps, E. A. (2016). Memory-guided attention: Independent contributions of the hippocampus and striatum. Neuron, 89, 317–324. https://doi.org/10.1016/j.neuron.2015.12.014, PubMed: 26777274
Gottlieb, J., Hayhoe, M., Hikosaka, O., & Rangel, A. (2014). Attention, reward, and information seeking. Journal of Neuroscience, 34, 15497–15504. https://doi.org/10.1523/JNEUROSCI.3270-14.2014, PubMed: 25392517
Hannula, D. E., & Duff, M. C. (Eds.). (2017). The hippocampus from cells to systems: Structure, connectivity, and functional contributions to memory and flexible cognition. https://doi.org/10.1007/978-3-319-50406-3
Hannula, D. E., Ryan, J. D., Tranel, D., & Cohen, N. J. (2007). Rapid onset relational memory effects are evident in eye movement behavior, but not in hippocampal amnesia. Journal of Cognitive Neuroscience, 19, 1690–1705. https://doi.org/10.1162/jocn.2007.19.10.1690, PubMed: 17854282
Hebart, M. N., Görgen, K., & Haynes, J.-D. (2015). The decoding toolbox (TDT): A versatile software package for multivariate analyses of functional imaging data. Frontiers in Neuroinformatics, 8, 88. https://doi.org/10.3389/fninf.2014.00088, PubMed: 25610393
Heisz, J. J., & Ryan, J. D. (2011). The effects of prior exposure on face processing in younger and older adults. Frontiers in Aging Neuroscience, 3, 15. https://doi.org/10.3389/fnagi.2011.00015, PubMed: 22007169
Henderson, J. M., & Hayes, T. R. (2018). Meaning guides attention in real-world scene images: Evidence from eye movements and meaning maps. Journal of Vision, 18, 10. https://doi.org/10.1167/18.6.10, PubMed: 30029216
Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33, 98–106. https://doi.org/10.3758/BF03195300, PubMed: 15915796
Hoffman, K. L., Dragan, M. C., Leonard, T. K., Micheli, C., Montefusco-Siegmund, R., & Valiante, T. A. (2013). Saccades during visual exploration align hippocampal 3–8 Hz rhythms in human and non-human primates. Frontiers in Systems Neuroscience, 7, 43. https://doi.org/10.3389/fnsys.2013.00043, PubMed: 24009562
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506. https://doi.org/10.1016/S0042-6989(99)00163-7, PubMed: 10788654
Johansson, R., & Johansson, M. (2013). Look here, eye movements play a functional role in memory retrieval. Psychological Science, 25, 236–242. https://doi.org/10.1177/0956797613498260, PubMed: 24166856
Jutras, M. J., Fries, P., & Buffalo, E. A. (2013). Oscillatory activity in the monkey hippocampus during visual exploration and memory formation. Proceedings of the National Academy of Sciences, U.S.A., 110, 13144–13149. https://doi.org/10.1073/pnas.1302351110, PubMed: 23878251
Killian, N., Jutras, M., & Buffalo, E. (2012). A map of visual space in the primate entorhinal cortex. Nature, 491, 761–764. https://doi.org/10.1038/nature11587, PubMed: 23103863
Knapen, T. (2021). Topographic connectivity reveals task-dependent retinotopic processing throughout the human brain. Proceedings of the National Academy of Sciences, U.S.A., 118, e2017032118. https://doi.org/10.1073/pnas.2017032118, PubMed: 33372144
Kragel, J. E., VanHaerents, S., Templer, J. W., Schuele, S., Rosenow, J. M., Nilakantan, A. S., et al. (2020). Hippocampal theta coordinates memory processing during visual exploration. eLife, 9, e52108. https://doi.org/10.7554/eLife.52108, PubMed: 32167568
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82, 1–26. https://doi.org/10.18637/jss.v082.i13
Laeng, B., Bloem, I. M., D'Ascenzo, S., & Tommasi, L. (2014). Scrutinizing visual images: The role of gaze in mental imagery and memory. Cognition, 131, 263–283. https://doi.org/10.1016/j.cognition.2014.01.003, PubMed: 24561190
Laeng, B., & Teodorescu, D.-S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science, 26, 207–231. https://doi.org/10.1016/S0364-0213(01)00065-9
Lieberman, M. D., & Cunningham, W. A. (2009). Type I and Type II error concerns in fMRI research: Re-balancing the scale. Social Cognitive and Affective Neuroscience, 4, 423–428. https://doi.org/10.1093/scan/nsp052, PubMed: 20035017
Liu, Z.-X., Rosenbaum, R. S., & Ryan, J. D. (2020). Restricting visual exploration directly impedes neural activity, functional connectivity, and memory. Cerebral Cortex Communications, 1, tgaa054. https://doi.org/10.1093/texcom/tgaa054, PubMed: 33154992
Liu, Z.-X., Shen, K., Olsen, R. K., & Ryan, J. D. (2017). Visual sampling predicts hippocampal activity. Journal of Neuroscience, 37, 599–609. https://doi.org/10.1523/JNEUROSCI.2610-16.2017, PubMed: 28100742
Liu, Z.-X., Shen, K., Olsen, R. K., & Ryan, J. D. (2018). Age-related changes in the relationship between visual exploration and hippocampal activity. Neuropsychologia, 119, 81–91. https://doi.org/10.1016/j.neuropsychologia.2018.07.032, PubMed: 30075215
Loftus, G. R. (1972). Eye fixations and recognition memory for pictures. Cognitive Psychology, 3, 525–551. https://doi.org/10.1016/0010-0285(72)90021-7
Meister, M. L. R., & Buffalo, E. A. (2016). Getting directions from the hippocampus: The neural connection between looking and memory. Neurobiology of Learning and Memory, 134, 135–144. https://doi.org/10.1016/j.nlm.2015.12.004, PubMed: 26743043
Meister, M. L. R., & Buffalo, E. A. (2018). Neurons in primate entorhinal cortex represent gaze position in multiple spatial reference frames. Journal of Neuroscience, 38, 2430–2441. https://doi.org/10.1523/JNEUROSCI.2432-17.2018, PubMed: 29386260
Nau, M., Julian, J. B., & Doeller, C. F. (2018). How the brain's navigation system shapes our visual experience. Trends in Cognitive Sciences, 22, 810–825. https://doi.org/10.1016/j.tics.2018.06.008, PubMed: 30031670
Noton, D., & Stark, L. (1971a). Scanpaths in eye movements during pattern perception. Science, 171, 308–311. https://doi.org/10.1126/science.171.3968.308, PubMed: 5538847
Noton, D., & Stark, L. (1971b). Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research, 11, 929–942. https://doi.org/10.1016/0042-6989(71)90213-6, PubMed: 5133265
O'Connell, T. P., & Walther, D. B. (2015). Dissociation of salience-driven and content-driven spatial attention to scene category with predictive decoding of gaze patterns. Journal of Vision, 15, 20. https://doi.org/10.1167/15.5.20, PubMed: 26067538
Olsen, R. K., Chiew, M., Buchsbaum, B. R., & Ryan, J. D. (2014). The relationship between delay period eye movements and visuospatial memory. Journal of Vision, 14, 8. https://doi.org/10.1167/14.1.8, PubMed: 24403394
Olsen, R. K., Lee, Y., Kube, J., Rosenbaum, R. S., Grady, C. L., Moscovitch, M., et al. (2015). The role of relational binding in item memory: Evidence from face recognition in a case of developmental amnesia. Journal of Neuroscience, 35, 5342–5350. https://doi.org/10.1523/JNEUROSCI.3987-14.2015, PubMed: 25834058
Olsen, R. K., Sebanayagam, V., Lee, Y., Moscovitch, M., Grady, C. L., Rosenbaum, R. S., et al. (2016). The relationship between eye movements and subsequent recognition: Evidence from individual differences and amnesia. Cortex, 85, 182–193. https://doi.org/10.1016/j.cortex.2016.10.007, PubMed: 27842701
Patai, E. Z., & Spiers, H. J. (2017). Human navigation: Occipital place area detects potential paths in a scene. Current Biology, 27, R599–R600. https://doi.org/10.1016/j.cub.2017.05.012, PubMed: 28633030
Rissman, J., Gazzaley, A., & D'Esposito, M. (2004). Measuring functional connectivity during distinct stages of a cognitive task. Neuroimage, 23, 752–763. https://doi.org/10.1016/j.neuroimage.2004.06.035, PubMed: 15488425
Rolls, E. T., Robertson, R. G., & Georges-François, P. (1997). Spatial view cells in the primate hippocampus. European Journal of Neuroscience, 9, 1789–1794. https://doi.org/10.1111/j.1460-9568.1997.tb01538.x, PubMed: 9283835
Ryals, A. J., Wang, J. X., Polnaszek, K. L., & Voss, J. L. (2015). Hippocampal contribution to implicit configuration memory expressed via eye movements during scene exploration. Hippocampus, 25, 1028–1041. https://doi.org/10.1002/hipo.22425, PubMed: 25620526
Ryan, J. D., Althoff, R. R., Whitlow, S., & Cohen, N. J. (2000). Amnesia is a deficit in relational memory. Psychological Science, 11, 454–461. https://doi.org/10.1111/1467-9280.00288, PubMed: 11202489
Ryan, J. D., Shen, K., Kacollja, A., Tian, H., Griffiths, J., Bezgin, G., et al. (2020). Modeling the influence of the hippocampal memory system on the oculomotor system. Network Neuroscience, 4, 217–233. https://doi.org/10.1162/netn_a_00120, PubMed: 32166209
Ryan, J. D., Shen, K., & Liu, Z.-X. (2020). The intersection between the oculomotor and hippocampal memory systems: Empirical developments and clinical implications. Annals of the New York Academy of Sciences, 1464, 115–141. https://doi.org/10.1111/nyas.14256, PubMed: 31617589
Sakon, J. J., & Suzuki, W. A. (2019). A neural signature of pattern separation in the monkey hippocampus. Proceedings of the National Academy of Sciences, U.S.A., 116, 9634–9643. https://doi.org/10.1073/pnas.1900804116, PubMed: 31010929
Scholz, A., Mehlhorn, K., & Krems, J. F. (2016). Listen up, eye movements play a role in verbal memory retrieval. Psychological Research, 80, 149–158. https://doi.org/10.1007/s00426-014-0639-4, PubMed: 25527078
Shen, K., Bezgin, G., Selvam, R., McIntosh, A. R., & Ryan, J. D. (2016). An anatomical interface between memory and oculomotor systems. Journal of Cognitive Neuroscience, 28, 1772–1783. https://doi.org/10.1162/jocn_a_01007, PubMed: 27378328
Silson, E. H., Zeidman, P., Knapen, T., & Baker, C. I. (2021). Representation of contralateral visual space in the human hippocampus. Journal of Neuroscience, 41, 2382–2392. https://doi.org/10.1523/JNEUROSCI.1990-20.2020, PubMed: 33500275
St-Laurent, M., Abdi, H., & Buchsbaum, B. R. (2015). Distributed patterns of reactivation predict vividness of recollection. Journal of Cognitive Neuroscience, 27, 2000–2018. https://doi.org/10.1162/jocn_a_00839, PubMed: 26102224
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage, 15, 273–289. https://doi.org/10.1006/nimg.2001.0978, PubMed: 11771995
Wagner, A. D., Schacter, D. L., Rotte, M., Koutstaal, W., Maril, A., Dale, A. M., et al. (1998). Building memories: Remembering and forgetting of verbal experiences as predicted by brain activity. Science, 281, 1188–1191. https://doi.org/10.1126/science.281.5380.1188, PubMed: 9712582
Walther, D., & Koch, C. (2006). Modeling attention to salient proto-objects. Neural Networks, 19, 1395–1407. https://doi.org/10.1016/j.neunet.2006.10.001, PubMed: 17098563
Wang, L., Baumgartner, F., Kaule, F. R., Hanke, M., & Pollmann, S. (2019). Individual face- and house-related eye movement patterns distinctively activate FFA and PPA. Nature Communications, 10, 5532. PubMed: 31797874
Wantz, A. L., Martarelli, C. S., Cazzoli, D., Kalla, R., Müri, R., & Mast, F. W. (2016). Disrupting frontal eye-field activity impairs memory recall. NeuroReport, 27, 374–378. https://doi.org/10.1097/WNR.0000000000000544, PubMed: 26901058
Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: The SHINE toolbox. Behavior Research Methods, 42, 671–684. https://doi.org/10.3758/BRM.42.3.671, PubMed: 20805589
Wynn, J. S., Buchsbaum, B. R., & Ryan, J. D. (2021). Encoding and retrieval eye movements mediate age differences in pattern completion. Cognition, 214, 104746. https://doi.org/10.1016/j.cognition.2021.104746, PubMed: 34034008
Wynn, J. S., Olsen, R. K., Binns, M. A., Buchsbaum, B. R., & Ryan, J. D. (2018). Fixation reinstatement supports visuospatial memory in older adults. Journal of Experimental Psychology: Human Perception and Performance, 44, 1119–1127. https://doi.org/10.1037/xhp0000522, PubMed: 29469586
Wynn, J. S., Ryan, J. D., & Buchsbaum, B. R. (2020). Eye movements support behavioral pattern completion. Proceedings of the National Academy of Sciences, U.S.A., 117, 6246–6254. https://doi.org/10.1073/pnas.1917586117, PubMed: 32123109
Wynn, J. S., Shen, K., & Ryan, J. D. (2019). Eye movements actively reinstate spatiotemporal mnemonic content. Vision, 3, 21. https://doi.org/10.3390/vision3020021, PubMed: 31735822
Yonelinas, A. P. (2013). The hippocampus supports high-resolution binding in the service of perception, working memory and long-term memory. Behavioural Brain Research, 254, 34–44. https://doi.org/10.1016/j.bbr.2013.05.030, PubMed: 23721964
