Neural Mechanisms of Perceiving and Subsequently
Recollecting Emotional Facial Expressions in
Young and Older Adults
Reina Izumika1, Roberto Cabeza2,3, and Takashi Tsukiura1
1Kyoto University, Japan, 2Duke University, Durham, NC, 3Humboldt University Berlin, Germany
Abstract
■ It is known that emotional facial expressions modulate the
perception and subsequent recollection of faces and that aging
alters these modulatory effects. Yet, the underlying neural
mechanisms are not well understood, and they were the focus
of the current fMRI study. We scanned healthy young and older
adults while perceiving happy, neutral, or angry faces paired
with names. Participants were then provided with the names
of the faces and asked to recall the facial expression of each
face. fMRI analyses focused on the fusiform face area (FFA),
the posterior superior temporal sulcus (pSTS), the OFC, el
amygdala (AMY), and the hippocampus (HC). Univariate
activity, multivariate pattern (MVPA), and functional connec-
tivity analyses were performed. The study yielded two main sets
of findings. First, in pSTS and AMY, univariate activity and MVPA
discrimination during the processing of facial expressions were
similar in young and older adults, whereas in FFA and OFC,
MVPA discriminated facial expressions less accurately in older
than young adults. These findings suggest that facial expression
representations in FFA and OFC reflect age-related dedifferen-
tiation and positivity effects. Second, HC–OFC connectivity
showed subsequent memory effects (SMEs) for happy expres-
sions in both age groups, HC–FFA connectivity exhibited SMEs
for happy and neutral expressions in young adults, and HC–
pSTS interactions displayed SMEs for happy expressions in
older adults. These results could be related to compensatory
mechanisms and positivity effects in older adults. Taken
together, the results clarify the effects of aging on the neural
mechanisms of perceiving and encoding facial expressions. ■
yo
D
oh
w
norte
oh
a
d
mi
d
F
r
oh
metro
h
t
t
pag
:
/
/
d
i
r
mi
C
t
.
metro
i
t
.
mi
d
tu
/
j
/
oh
C
norte
a
r
t
i
C
mi
–
pag
d
yo
F
/
/
/
3
4
7
1
1
8
3
2
0
3
4
4
7
2
/
/
j
oh
C
norte
_
a
_
0
1
8
5
1
pag
d
.
F
b
y
gramo
tu
mi
s
t
t
oh
norte
0
8
S
mi
pag
mi
metro
b
mi
r
2
0
2
3
INTRODUCTION
The most common memory complaint in healthy older
adultos (p.ej., 83% of responders in Bolla, Lindgren,
Bonaccorsy, & Bleecker, 1991; see also Cohen & Faulkner,
1986; Zelinski, Gilewski, & Thompson, 1980) is a difficulty
in remembering people’s names. This deficit, which has
been confirmed in laboratory studies on face–name asso-
ciations ( James, Fogler, & Tauber, 2008; Naveh-Benjamin,
Guez, Kilb, & Reedy, 2004; Crook & West, 1990), is not
surprising given that older adults are impaired both in
processing visual stimuli (Baltes & Lindenberger, 1997;
Lindenberger & Baltes, 1994) including face perception
(for review, see Boutet, Taler, & Collin, 2015) and in estab-
lishing new associations (Greene & Naveh-Benjamin,
2020; Naveh-Benjamin, 2000). An important component
of older adults’ visual perception deficits is a reduction
in neural specificity known as the age-related dedifferenti-
ation effect (for review, see Koen & Rugg, 2019). In con-
trast, the associative memory deficit has been linked to
impaired hippocampal activity (Tsukiura et al., 2011;
Dennis et al., 2008) and hippocampal–cortical connec-
tivity (Ness et al., 2022; Tsukiura et al., 2011; Leshikar,
Gutchess, Hebrank, Sutton, & Park, 2010). In a previous
estudiar, we found that functional connectivity between the
hippocampus (HC) and the OFC during face–name asso-
ciative learning was enhanced by happy facial expressions
and that this mechanism was related to better memory
for happy faces paired with names (Tsukiura & Cabeza,
2008). Given that older adults show the age-related pos-
itivity effect, which is known as a bias toward positive
emotional stimuli or a tendency to interpret neutral
stimuli as emotionally positive (for review, see
Mather & Carstensen, 2005), an obvious question is
whether the positivity effect could enhance HC–cortex
connectivity during the encoding of face–name associa-
ciones. If this effect is also associated with better face–name
aprendiendo, it would be an example of functional compensa-
tion in older adults. Below, we briefly describe the dedif-
ferentiation and positivity effects, and their implications
for the present study.
The age-related dedifferentiation effect in visual percep-
tion refers to the finding that the neural representations
for different visual stimuli are less distinct in older than
young adults (Park et al., 2004; for review, see Koen &
Rugg, 2019). This effect has been demonstrated for a
variety of tasks and stimuli (Deng et al., 2021; Hill, King,
& Rugg, 2021; Saverino et al., 2016; Dennis & Cabeza,
2011; Kalkstein, Checksfield, Bollinger, & Gazzaley,
2011; St-Laurent, Abdi, Burianova, & Grady, 2011; Park,
Carp, Hebrank, Park, & Polk, 2010; Payer et al., 2006),
including faces (Goh, Suzuki, & Park, 2010). Age-related
dedifferentiation has been traditionally examined using
univariate analyses and, more recently, using multivariate
representational analyses, such as multivariate pattern
análisis (MVPA) (Katsumi, Andreano, Barrett, Dickerson,
& Touroutoglou, 2021; Hill et al., 2021; Dennis et al.,
2019). MVPA is used to measure the discriminability
between different stimuli (e.g., faces vs. objects in the
work of Haxby et al., 2001), different exemplars of the
same class (e.g., different facial identities in the work of
Ghuman et al., 2014), or different qualities across stimuli
of the same class (e.g., different facial expressions in the
work of Wegrzyn et al., 2015; Harry, Williams, Davis, &
Kim, 2013). In the present study, we focused on the
age-related dedifferentiation in perceiving different facial
expresiones. Older adults do not discriminate facial
expressions compared with young adults, and this deficit
has been interpreted as evidence of the age-related
dedifferentiation (franklin & Zebrowitz, 2017). Two
potential regions reflecting the age-related dedifferentia-
tion for facial expressions are the fusiform face area
(FFA), which is sensitive to the processing of facial
expresiones ( Wegrzyn et al., 2015; Skerry & sajonia, 2014;
Harry et al., 2013), and OFC, which is involved in the
processing of socioemotional signals, including facial
expresiones (Goodkind et al., 2012; watson & Platón,
2012; Heberlein, Padon, Gillihan, Farah, & Fellows,
2008; Hornak et al., 2003; Hornak, Rolls, & Wade,
1996). Both FFA and OFC are affected by age-related
atrophy and dedifferentiation (Katsumi et al., 2021; Xie
et al., 2021; Shen et al., 2013; Lee, Grady, Habak, Wilson,
& Moscovitch, 2011; Goh et al., 2010; Fjell et al., 2009;
Salat et al., 2009; Park et al., 2004) and hence are likely
to show age-related dedifferentiation for facial expressions
in the present study.
The age-related positivity effect refers to the finding
that older adults often show a bias toward positive stimuli
and interpret ambiguous socioemotional stimuli as more
positive than young adults (for review, see Mather &
Carstensen, 2005). In behavioral studies, the positivity
effect has been found for a variety of emotional stimuli
(Huan, Liu, Lei, & Yu, 2020; Gallo, Korthauer, McDonough,
Teshale, & Johnson, 2011; van Reekum et al., 2011;
Comblain, D’Argembeau, & van der Linden, 2005), incluido
faces (Zebrowitz, Boshyan, Ward, Gutchess, & Hadjikhani,
2017; Riediger, Voelkle, Ebner, & Lindenberger, 2011;
Leigland, Schulz, & Janowsky, 2004). In fMRI studies, el
memory-related positivity effect in older adults has been
linked to age-related changes in functional connectivity
for emotional pictures (Addis, Leclerc, Muscatell, &
Kensinger, 2010; St Jacques, Dolcos, & Cabeza, 2009)
and to an age-related increase in functional connectivity
and memory performance for emotionally positive pic-
turas (Addis et al., 2010). These changes could be attrib-
uted to functional compensation, which refers to the
cognition-enhance recruitment of neural resources (para
revisar, see Cabeza et al., 2018). In our prior fMRI study
of memory for face–name associations, Encontramos eso
happy facial expressions boosted functional connectivity
between HC and OFC to a greater extent for subsequently
remembered than forgotten stimuli (Tsukiura & Cabeza,
2008). De este modo, in the present study, we were interested in
(1) whether we would find an age-related increase in func-
tional connectivity for happy faces between HC and OFC
or other regions related to processing facial expressions
such as the posterior superior temporal sulcus (pSTS)
(Wegrzyn et al., 2015; Said, Moore, Engell, Todorov, &
Haxby, 2010) or FFA (Wegrzyn et al., 2015; Skerry & Saxe,
2014; Harry et al., 2013) and (2) whether this effect would
be associated with subsequent memory (for review, ver
Paller & Wagner, 2002), suggesting the age-related com-
pensation in memory.
In the present event-related fMRI study, Participantes
were scanned while viewing happy, neutral, or angry faces
paired with names, and memory for the facial expressions
was assessed by presenting the names as cues and
asking participants to recall the facial expression of each
face associated with the cued name (ver figura 1). We per-
formed traditional univariate analyses, but our focus was
the dedifferentiation effect measured with MVPA and the
positivity effect measured with functional connectivity
analiza. To investigate the dedifferentiation in perceiving
facial expressions, an MVPA classifier was trained to distin-
guish between happy, neutral, and angry expressions, y
was then used to assess the discriminability among these
expressions during face perception. If the MVPA classifiers
do not distinguish these facial expressions in older adults,
it would reflect the age-related dedifferentiation in per-
ceiving facial expressions. As noted above, our candidate
regions reflecting the age-related dedifferentiation for
facial expressions were FFA and OFC (Katsumi et al.,
2021; Xie et al., 2021; Lee et al., 2011; Goh et al., 2010; Parque
et al., 2004), which are regions that are involved in facial
expresiones ( Wegrzyn et al., 2015; Skerry & sajonia, 2014;
Harry et al., 2013; watson & Platón, 2012; Heberlein et al.,
2008; Hornak et al., 2003; Hornak et al., 1996) y eso
show atrophy in older adults (Shen et al., 2013; Fjell
et al., 2009; Salat et al., 2009). Además, MVPA in the
present fMRI study also investigated neural specificity for
facial expressions in the amygdala (AMY) related to the
perception of highly arousing facial expressions (Yang
et al., 2002; Breiter et al., 1996) and pSTS related to the
processing of face-based social signals, including facial
expressions and eye movements ( Wegrzyn et al., 2015;
Said et al., 2010; Puce, Allison, Bentin, Gore, & McCarthy,
1998). To investigate the age-related positivity effect, we
performed functional connectivity analyses for subsequently
remembered and forgotten trials. As explained above,
we focused on investigating whether HC–cortex
conectividad funcional, which has been shown in the
age-related positivity effect (Addis et al., 2010), is asso-
ciated with successful memory in older adults. If so,
Cifra 1. Example of the encoding and retrieval trials. (A) Example of the encoding trials. (B) Examples of the retrieval trials. Facial stimuli in this
figure were collected from the royalty-free database (https://www.photo-ac.com/) for illustration purposes only. All verbal items were presented in
Japanese. English is used here for illustration purposes only.
such effect would be consistent with the age-related
compensación.
METHODS
Participants
In this study, we scanned 36 young (16 women) and 36
older (18 women) adults and paid them for their participa-
tion in the fMRI experiment. All participants were right-
handed, native Japanese speakers, with no history of
neurological or psychiatric disorders. Their vision was
normal or corrected to normal with glasses. All young
participants were recruited from the Kyoto University
comunidad, and all older participants were recruited from
the Kyoto City Silver Human Resource Center. All partici-
pants provided written informed consent for a protocol
approved by the institutional review board of Graduate
School of Human and Environmental Studies, Kyoto
University (19-H-10). An a priori power analysis for sample
size was conducted on a design of repeated-measures
ANOVA with an interaction of between-subjects factor of
Age Group (Young and Old) and within-subject factor of
Facial Expression (Happy, Neutral, and Angry). In this anal-
ysis, we employed G*Power Version 3.1 (Faul, Erdfelder,
Lang, & Buchner, 2007), which estimated a total sample
number of 56 (28 young and 28 older adults) on parame-
ters of a small-to-medium effect size ( f = 0.2), an error proba-
bility (α = .05), and a power of 0.90. The estimated sample
size is supported by a similar fMRI study investigating
effects of aging and facial expressions on neural mecha-
nisms during the processing of faces (Ebner, Johnson, &
Fischer, 2012). To retain sufficient power in the case of
missing data due to poor performance, large head motion,
and so on, we recruited 36 young and 36 older adults
in the present study.
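
For readers who wish to check this estimate, the sketch below reproduces the noncentral-F power computation that G*Power performs for a within-between interaction. The number of measurements (3), the correlation among repeated measures (.5), and the nonsphericity correction (ε = 1) are assumed G*Power defaults, not values reported here, so the loop lands near, though not necessarily exactly at, the reported total of 56.

```python
# A sketch of the a priori power computation, assuming G*Power's default
# repeated-measures parameters (m = 3 measurements, correlation rho = .5,
# nonsphericity epsilon = 1); these defaults are assumptions, not values
# reported in the text.
from scipy.stats import f as f_dist, ncf

f_eff, alpha, target_power = 0.2, 0.05, 0.90
k, m, rho = 2, 3, 0.5  # age groups, facial expressions, assumed correlation

for n_total in range(k + 2, 500):
    lam = n_total * m * f_eff ** 2 / (1 - rho)  # noncentrality parameter
    df1 = (k - 1) * (m - 1)                     # interaction df
    df2 = (n_total - k) * (m - 1)               # error df
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    power = 1 - ncf.cdf(f_crit, df1, df2, lam)
    if power >= target_power:
        print(f"total N = {n_total}, power = {power:.3f}")  # near the reported 56
        break
```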
All participants performed several neuropsychological
pruebas, including the Japanese version of the Flinders Hand-
edness Survey (FLANDERS) (Okubo, Suzuki, & Nicholls,
2014; Nicholls, Thomas, Loetscher, & Grimshaw, 2013),
the Japanese version of the Montreal Cognitive Assess-
mento (MoCA-J; Fujiwara et al., 2010; Nasreddine et al.,
2005), and the Center for Epidemiologic Studies Depres-
sion scale (CES-D; Shima, 1985; Radloff, 1977). One young
and two older participants showed head movement larger
than 1.5 voxels in two or more fMRI runs. Además, uno
older participant misunderstood the experimental proce-
dures of the encoding task, one older participant felt sick
in the MRI scanner, and one young and one older partici-
pant showed possible pathological changes (probable
arachnoid cyst) in their structural MRIs. In neuropsycho-
logical tests, the MoCA-J score of one young participant
was more than 2 SDs below the mean score of the
young group. Regarding the CES-D score, two young
participants and one older participant showed scores more
than 2 SDs worse than the mean of their respective
groups. In the behavioral perfor-
mance of the fMRI task, four young and two older
participants had fewer than three trials in at least one experi-
mental condition of the fMRI analyses. According to these
exclusion criteria, behavioral and MRI data from nine
young participants and eight older participants were
excluded from all analyses. De este modo, the analyses were based
on data from 27 young (12 women; mean age = 21.19
[SD = 1.62] years) and 28 older (14 women; mean age =
67.36 [SD = 2.57] years) adults.
Age, education year, FLANDERS score, MoCA-J score,
and CES-D score data for each participant were compared
by two-sample t tests (two-tailed) between age groups of
young and older adults. A significant difference between
the two groups was identified in age, t(53) = 79.38, pag <
.001, d = 21.41, and the MoCA-J score, t(53) = 4.75, p <
.001, d = 1.28. However, we did not find significant differ-
ences in years of education, t(53) = 0.60, p = .55, d = 0.16;
the FLANDERS score, t(53) = 1.31, p = .20, d = 0.35; or
the CES-D score, t(53) = 1.13, p = .26, d = 0.31. Detailed
profiles in young and older adults whose data were
analyzed are summarized in Table 1.

Table 1. Participant Characteristics

                      Young (SD)     Old (SD)       Two-sample t test
Age, years            21.19 (1.62)   67.36 (2.57)   Young < old***
Sex, male:female      15:12          14:14
Education, years      14.07 (1.24)   14.32 (1.79)   n.s.
FLANDERS              9.41 (1.37)    9.79 (0.69)    n.s.
MoCA-J                28.52 (0.85)   26.36 (2.22)   Young > old***
CES-D                 8.19 (4.46)    9.75 (5.72)    n.s.

FLANDERS = Japanese version of the Flinders Handedness Survey; MoCA-J = Japanese version of the Montreal Cognitive Assessment; CES-D = the Center for Epidemiologic Studies Depression scale; n.s. = not significant.
*** p < .001.
Stimuli
The stimuli were colored face pictures of 120 unfamiliar
persons (60 female and 60 male faces) selected from an
in-house database, and each face included happy, neutral,
and angry facial expressions. This database contained faces
of volunteer pedestrians in their thirties and forties, recruited
in the downtown area of Kyoto City, who were asked to
pose happy, angry, and neutral facial expressions. All
pictures were taken against a gray background,
and the eyes of each face were directed to the front. Easily
identifiable visual features of each picture, such as blem-
ishes, freckles, moles, scars, and ornaments, were
removed (Sugimoto, Dolcos, & Tsukiura, 2021), and the
color of the clothes in each picture was converted into a
uniform black color using image processing software
(Adobe Photoshop CS 5.1). The resolution of all pictures
was resized to 280 × 350 pixels. These pictures of 120 per-
sons with three facial expressions each (360 pictures in total)
were divided into three lists of 40 persons each, among
Table 1. Participant Characteristics
which age and sex were controlled to be equal. Using data
from 24 healthy younger adults in a previous study
(Sugimoto et al., 2021), emotional arousal and valence in
happy, neutral, and angry expressions were controlled to
be equal across the lists. The scores of arousal and valence
were statistically compared among the lists in each facial
expression by one-way ANOVAs. The ANOVA for arousal
scores in each facial expression showed no significant
difference among the lists [happy: F(2, 117) = 0.02, p =
.98, η2 = .00; neutral: F(2, 117) = 0.15, p = .86, η2 =
.00; angry: F(2, 117) = 0.003, p = .997, η2 = .00]. In the
ANOVA for valence scores, we did not find a significant dif-
ference among the lists in each facial expression [happy:
F(2, 117) = 0.06, p = .94, η2 = .00; neutral: F(2, 117) =
0.24, p = .79, η2 = .00; angry: F(2, 117) = 0.32, p = .73,
η2 = .01]. Each list was assigned to the condition of
either Happy, Neutral, or Angry for target faces to be
encoded, and the assignment was counterbalanced
across participants.
A set of Japanese family names was also employed in this
study. The 160 most popular Japanese family
names, which were written with two kanji characters
that could have different pronunciations, were collected
from an on-line database (myoji-yurai.net/prefectureRanking
.htm). These 160 names were divided into four lists with-
out popularity bias. One hundred twenty names in three
lists were randomly paired with the 120 target faces, and
40 names in one list were used as distracters in the
retrieval phase.
Experimental Procedures
fMRI runs included a memory task regarding the encoding
and retrieval of face–name pairs and a functional localizer
task. Encoding and retrieval runs of the memory task
alternated across eight runs, with each retrieval run testing
face–name pairs encoded in the previous encoding run.
Each set of the four encoding-retrieval runs used different
lists of face–name pairs, and there was approximately a
1-min interval between encoding and retrieval runs in
each set. After exiting the scanner, participants evaluated
the faces for emotional arousal and valence. Stimulus pre-
sentations and recording of participants’ responses in all
tasks were controlled by MATLAB scripts (www.mathworks
.com). All participants were fully trained on encoding
and retrieval procedures before the experiment.
Memory Task
Figure 1 illustrates encoding and retrieval trials in a mem-
ory task of face–name pairs. During both encoding and
retrieval, each stimulus was presented for 3500 msec and
was followed by a jittered (2500–7500 msec) visual fixation
as ISI. During each encoding run, participants were ran-
domly presented with 30 face–name pairs one by one.
For each pair, they were instructed to learn each pair by
reading the name silently and pressing a key to indicate
the expression of the face (“Happy,” “Neutral,” or
“Angry”). During each retrieval run, participants were pre-
sented with 30 names of the face–name pairs encoded in
the previous run mixed with 10 new names in random
order. For each name, participants were told that if they
believed the name was not paired with a face in the previ-
ous encoding run, they should press “New.” If they
believed the name was paired with a face in the previous
encoding run, they should indicate the expression of the
face by pressing “Happy,” “Neutral,” or “Angry.” If they
believed the name was paired with a face in the previous
encoding run but could not remember the expression,
they should press “Unknown.” They were asked to make
responses during encoding and during retrieval as quickly
as possible.
In the present study, we focused on the analyses of fMRI
data only from the encoding runs. Trials with no
response in either the encoding or retrieval run, and trials
in which facial expressions were erroneously judged in the
encoding runs, were excluded from all analyses. In trials in which
learned names were presented, trials in which facial
expressions associated with the names were successfully
recalled were defined as Hit; trials in which facial expres-
sions associated with the names were erroneously recalled
or were categorized as “Unknown,” or in which the names
were judged as “New” were defined as Miss. The Hit and
Miss trials were subdivided into the three facial expres-
sions, Happy, Neutral, and Angry, in which each facial
expression was presented during encoding.
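
These definitions map onto a simple per-trial decision rule, restated in the sketch below; the variable names are hypothetical and are not taken from the authors' analysis scripts.

```python
# Illustrative restatement of the trial definitions above; names are
# hypothetical, not the authors' code.
def score_trial(encoding_response, retrieval_response, studied_expression):
    """Return 'NR' (excluded), 'Hit', or 'Miss' for one encoding trial."""
    # Excluded: no response at encoding or retrieval, or the expression
    # was judged incorrectly at encoding.
    if encoding_response is None or retrieval_response is None:
        return "NR"
    if encoding_response != studied_expression:
        return "NR"
    # Hit: the studied expression is correctly recalled from the name cue.
    if retrieval_response == studied_expression:
        return "Hit"
    # Miss: wrong expression, "Unknown", or the learned name judged "New".
    return "Miss"

assert score_trial("Happy", "Happy", "Happy") == "Hit"
assert score_trial("Happy", "Unknown", "Happy") == "Miss"
assert score_trial("Neutral", "Neutral", "Happy") == "NR"
```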
Functional Localizer Task
After completing the memory task of face–name associa-
tions, participants performed a run of the functional
localizer task (Matsuda et al., 2013), in which movies of
emotional facial expressions were presented. The ratio-
nale for using movies of facial expressions was that dynamic
facial expressions have produced greater activation in
face-related regions than static images (Foley, Rippon,
Thai, Longe, & Senior, 2012; Fox, Iaria, & Barton, 2009;
Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004;
LaBar, Crupain, Voyvodic, & McCarthy, 2003). In addition,
the functional localizer task enabled us to identify brain
regions reflecting the common processing of multiple
facial expressions rather than the processing of a selective
facial expression.
In this task, participants were presented with 2-sec
movies of male and female faces, in which a neutral facial
expression was changed to either an emotional facial
expression of joy, fear, anger, or disgust, or with 2-sec
movies in which the original movies of male and female
faces were transformed into mosaic forms for the control.
Thus, we prepared 16 movies, including 8 original and 8
control movies. In addition, we prepared another version
of the 2-sec original and control movies, into which the
momentary presentation (100 msec) of building pictures
was inserted at a random time in approximately every third trial.
Participants were required to press the corresponding
button as fast as possible when they noticed the building
pictures. These movies with the momentary presentation
of building pictures included four original and four control
stimuli. All of these movies were randomly presented one
by one for 2000 msec each, and a visual fixation was shown
as ISI, jittered with variable durations (2500–5500 msec).
This task included 120 trials, in which 24 movie stimuli
were repeated 5 times.
Evaluation Task
After scanning, participants rated the emotional arousal
and valence elicited by the encoded faces. In one run,
the 120 encoded faces were presented and rated for emo-
tional arousal (1 = calm, 9 = exciting), and in another run,
the same faces were presented and rated for emotional
valence (1 = unpleasant, 9 = pleasant). The faces were
presented in random order, each for 2000 msec for young
adults and for 3000 msec for older adults with a 1000-msec
ISI. The order of the two rating runs was counterbalanced
across participants.
MRI Data Acquisition
All MRI data were acquired by a MAGNETOM Verio 3-T MRI
scanner (Siemens), which is located at the Kokoro
Research Center, Kyoto University. Stimuli were visually
presented on an MRI-compatible display (Nordic Neuro
Lab, Inc.), and participants viewed the stimuli through a
mirror attached to the head coil of the MRI scanner. Behav-
ioral responses were recorded by a five-button fiber optic
response pad (Current Designs, Inc.), which was assigned
to the right hand. Head motion in the scanner was mini-
mized by a neck supporter and foam pads, and scanner
noise was reduced by ear plugs. First, three directional
T1-weighted structural images were acquired to localize
the subsequent functional and high-resolution anatomical
images. Second, functional images were recorded using a
pulse sequence of gradient-echo EPI, which is sensitive to
blood oxygenation level-dependent contrast (repetition
time = 1500 msec, flip angle = 60°, echo time = 38.8 msec,
field of view = 22.0 cm × 22.0 cm, matrix size = 100 ×
100, 68 horizontal slices, slice thickness/gap = 2.2/0 mm,
multiband factor = 4). Finally, high-resolution T1-
weighted structural images were obtained using MPRAGE
(repetition time = 2250 msec, echo time = 3.51 msec, field
of view = 25.6 cm, matrix size = 256 × 256, 208 horizontal
slices, slice thickness/gap = 1.0/0 mm).
fMRI Data Analysis
Preprocessing
All MRI data were preprocessed by Statistical Parametric
Mapping 12 (SPM12: www.fil.ion.ucl.ac.uk/spm/software
/spm12/) implemented in MATLAB (www.mathworks
.com). In the preprocessing, fMRI data from the memory
and functional localizer tasks were analyzed separately.
First, the initial six volumes of functional images in each
run were discarded to prevent an initial dip. Second, six
parameters of head motion were extracted from a series
of the remaining functional images. Third, a high-
resolution structural image was coregistered to the first
volume of the functional images. Fourth, during the spatial
normalization process, we estimated parameters to fit
anatomical space of the structural image to the Tissue
Probability Map in the Montreal Neurological Institute
(MNI) template, and the parameters were written to all
functional images (resampled resolution = 2.2 mm ×
2.2 mm × 2.2 mm). Finally, these normalized functional
images were spatially smoothed by a Gaussian kernel of
FWHM = 5 mm. These functional images after all the pre-
processing steps were applied to the univariate analyses
in the memory task and functional localizer task and to
the functional connectivity analysis in the memory task.
In MVPA of the memory task, functional images without
spatial smoothing were analyzed.
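
As an illustration of two of these steps, the sketch below mirrors the volume discarding and smoothing parameters given above using nilearn as a stand-in for SPM12; the file name is hypothetical, and the realignment and normalization steps are not reproduced.

```python
# Minimal sketch of two preprocessing steps described above, using
# nilearn instead of SPM12; "func_run1.nii" is a hypothetical file.
from nilearn import image

func = image.load_img("func_run1.nii")
func = image.index_img(func, slice(6, None))    # discard initial 6 volumes
func_smoothed = image.smooth_img(func, fwhm=5)  # 5-mm FWHM Gaussian kernel
# The smoothed images would feed the univariate and connectivity analyses;
# MVPA uses the normalized but unsmoothed images.
```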
Univariate Analysis in the Functional Localizer Task
and ROI Definition
Functional images in the functional localizer task were sta-
tistically analyzed to define ROIs related to the processing
of faces and facial expressions. Statistical analyses were
performed in SPM12 at the individual level and then at
the group level. In the individual-level (fixed-effect) analy-
sis, trial-related activation was modeled by convolving a
vector of onsets with a canonical hemodynamic response
function (HRF) in the context of the general linear model
(GLM), in which the timing of stimulus presentation was
defined as the onset with an event duration of 0 sec. This
model included nine regressors reflecting four conditions
related to the original movies of each facial expression
(Happy, Fear, Angry, and Disgust), four control conditions
related to the mosaic movies transformed from the origi-
nal movies (Happy-Mosaic, Fear-Mosaic, Angry-Mosaic,
and Disgust-Mosaic), and one dummy condition in which
a building picture was inserted into the original and con-
trol movies. Six parameters related to head motion were
also included in this model as confounding factors. Activa-
tion related to the processing of faces and facial expres-
sions was identified by comparing all conditions of the
original movies (Happy, Fear, Angry, and Disgust) with
all control conditions of the mosaic movies (Happy-
Mosaic, Fear-Mosaic, Angry-Mosaic, and Disgust-Mosaic),
and the contrast yielded a t statistic in each voxel. A con-
trast image was created for each participant.
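
A nilearn equivalent of this design (impulse events convolved with the canonical HRF, plus six motion confounds) might look like the sketch below; the onsets and run length are made up for illustration, and the authors' actual models were specified in SPM12.

```python
# Sketch of the localizer GLM design described above (0-sec events
# convolved with a canonical HRF, six motion confounds); onsets and run
# length are illustrative, not the study's actual values.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

t_r, n_scans = 1.5, 300
frame_times = np.arange(n_scans) * t_r

events = pd.DataFrame({
    "onset": [10.0, 16.5, 23.0],            # stimulus onsets in seconds
    "duration": [0.0, 0.0, 0.0],            # events modeled as impulses
    "trial_type": ["Happy", "Fear-Mosaic", "Disgust"],
})
motion = np.zeros((n_scans, 6))             # placeholder head-motion params

design = make_first_level_design_matrix(
    frame_times, events, hrf_model="spm",   # SPM's canonical HRF
    drift_model="cosine",
    add_regs=motion,
    add_reg_names=[f"motion_{i}" for i in range(6)],
)
```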
In the group-level (random-effect) analyses, contrast
images produced by the individual-level analysis were ana-
lyzed by a one-sample t test for all participants in both age
groups. This test produced an activation map reflecting
greater activation during the general processing of faces
and facial expressions than during simple visual process-
ing. In the whole-brain analysis, the height threshold at
the voxel level ( p < .001) was corrected for whole-brain
multiple comparisons by the family-wise error (FWE) rate
( p < .05) with a minimum cluster size of 10 voxels.
Table 2 summarizes results in the functional localizer
task. Significant activation was identified in one cluster,
which included right pSTS, right FFA, and right occipital
face area (OFA), and in each cluster of left pSTS, left
FFA, left OFA, and bilateral AMY. These regions were
applied to ROI masks in the univariate analysis of the
memory task. In addition, the significant activation cluster
was combined with each anatomical mask to define the
right pSTS, right FFA, and bilateral AMY ROIs, which were
used for MVPA (see Figure 4). The pSTS ROI was defined
as a cluster reflecting significant activation in a region
created by removing the anterior temporal lobe, as reported in
a previous study (Binney, Embleton, Jefferies, Parker, &
Ralph, 2010), from the right superior and
middle temporal gyri of the automated anatomical
labeling (AAL) ROI package. A cluster showing significant
activation in the right fusiform gyrus of the AAL ROI
package (Tzourio-Mazoyer et al., 2002) was defined
as the right FFA ROI. A cluster showing significant activa-
tion in the bilateral AMY extracted from a previous
study (Amunts et al., 2005) was defined as the bilateral
AMY ROI.
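
Combining a functional cluster with an anatomical mask amounts to a voxel-wise conjunction, sketched below with nilearn; both file names are hypothetical stand-ins for the thresholded localizer map and the AAL (or cytoarchitectonic) mask.

```python
# Sketch of defining an ROI as the intersection of a significant localizer
# cluster and an anatomical mask; file names are hypothetical.
from nilearn import image

cluster = image.load_img("localizer_cluster_right_fusiform.nii")
anat = image.load_img("aal_right_fusiform_mask.nii")

# Keep voxels inside both the functional cluster and the anatomical
# region; the product of two binary maps is their conjunction.
ffa_roi = image.math_img("(cluster > 0) * (anat > 0)",
                         cluster=cluster, anat=anat)
ffa_roi.to_filename("roi_right_ffa.nii")
```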
The ROI mask in bilateral OFC was defined anatomi-
cally and included bilateral regions in the superior,
middle, inferior, and medial orbitofrontal gyri defined by
the AAL ROI package. This OFC ROI was used in the uni-
variate analysis and MVPA. To determine seed VOIs in the
functional connectivity analyses, significant voxels fulfilling
the height threshold ( p < .001) were corrected for multi-
ple comparisons in each region of bilateral OFC, posterior
parts of the right superior and middle temporal gyri, and
the right fusiform gyrus, which were created by the AAL ROI
package mentioned above. Significant activation was
found in each region, in which peak voxels in left OFC
(x = −41, y = 29, z = −6), right pSTS (x = 45, y =
−66, z = 0), and right FFA (x = 41, y = −48, z = −19)
Table 2. Regions Showing Significant Activation in Functional Localizer Task

                                                               MNI coordinates
Regions                                   L/R  BA              x    y    z    Z Value   k

Whole-brain analysis
Middle temporal gyrus (pSTS)a             L    21/22/37       −54  −64    7    5.74      116
Fusiform gyrus (FFA)a                     L    37             −41  −57  −22    6.88       64
Middle/inferior occipital gyrus (OFA)a    L    19             −45  −79   −6    6.03       74
Superior/middle temporal gyrus (pSTS),    R    19/21/22/37/42  45  −75   −8    Inf      1110
  fusiform gyrus (FFA), and middle/
  inferior occipital gyrus (OFA)a
AMYa                                      L                   −21   −6  −17    7.69      102
AMYa                                      R                    21   −9  −15    Inf        71

ROI-based analysis (OFC)
Inferior orbitofrontal gyrusb             L    47             −41   29   −6    4.15        3

ROI-based analysis (posterior parts of the right superior and middle temporal gyri)
Superior/middle temporal gyrus (pSTS)b    R    21/22/37/41     45  −66    0    7.08     1227

ROI-based analysis (fusiform gyrus)
Fusiform gyrus (FFA)b                     R    19/37           41  −48  −19    Inf       171

BA = Brodmann area; k = cluster size; L = left; R = right.
a Cluster used as ROI in MVPA after masking it with the corresponding anatomical ROI.
b MNI coordinate used for the center of a seed VOI in the functional connectivity analysis.
Univariate Analysis
In the present study, we focused on the statistical analysis
of fMRI data only from four runs during the encoding
phase in the memory task. Retrieval-related activity will
be analyzed and reported elsewhere. In
one young adult and one older adult who showed head
movements larger than 1.5 voxels during one run of
the encoding phase, fMRI data from the remaining three
encoding runs were used in the univariate analysis, MVPA,
and functional connectivity analysis.
In the univariate analysis of the memory task, using
SPM12, functional images were analyzed at the individual
level and then at the group level. In the individual-level
(fixed-effect) analysis, we modeled trial-related activation
by convolving onset vectors with a canonical HRF in the
context of the GLM. The onset timing, when face–name
associations were presented, was defined as an event with
a duration of 0 sec. Regressors in this model included
three facial expressions (Happy, Neutral, and Angry) and
one no-response (NR) condition, which was defined as
encoding trials in which participants showed no response
in the encoding and/or retrieval phases or exhibited
failure in judging facial expressions in the encoding phase.
Six parameters related to head motion were also included
in this model as confounding factors. Activation reflecting
the processing of each facial expression (Happy, Neutral,
and Angry) was computed by comparison with baseline
activation by one-sample t tests, and the contrast yielded
a t statistic in each voxel. The three contrast images in each
facial expression (Happy, Neutral, and Angry) were cre-
ated for each participant.
At the group-level (random-effect) analyses, the three
contrast images (Happy, Neutral, and Angry) obtained by
the individual-level analysis were analyzed with a two-
way mixed ANOVA with factors of Age Group (Young and
Old) and Facial Expression (Happy, Neutral, and Angry),
which was modeled by a flexible factorial design with a
subject factor. Three types of analysis were performed.
First, to identify regions associated with individual facial
expressions, the main effect of facial expression (F test)
was inclusively masked with pairs of t contrasts: (a) For
happy expressions, the contrasts were Happy > Neutral
and Happy > Angry ( pag < .05); (b) for angry expressions,
they were Angry > Neutral and Angry > Happy ( pag < .05);
and (c) for both happy and angry expressions (i.e., arous-
ing expressions), the contrasts were Happy > nuevo-
tral and Angry > Neutral ( pag < .05). Second, to identify
age-related decreases in activity, the main effect of age
group (F test) was inclusively masked with the t contrast
of Young > Old ( pag < .05). Finally, to investigate differen-
tial effects of facial expressions in young and older adults,
the interaction of Age Group by Facial Expression (F test)
was masked inclusively by two types of t contrast: (a)
[(Happy > Neutral in Young) > (Happy > Neutral in
Old)] y [(Happy > Angry in Young) > (Happy > Angry
in Old)] ( pag < .05); and (b) [(Angry > Neutral in Young) >
(Angry > Neutral in Old)] y [(Angry > Happy in Young) >
(Angry > Happy in Old)] ( pag < .05).
In the foregoing analyses, the height threshold at the
voxel level ( p < .001) was corrected for multiple compar-
isons in the hypothesis-driven ROI (FWE, p < .05) with a
minimum cluster size of two voxels. The ROI in the univariate
analyses of the memory task was created by combining
regions identified in the functional localizer task with
bilateral OFC defined anatomically in the AAL ROI pack-
age (Tzourio-Mazoyer et al., 2002). Anatomical sites show-
ing significant activation were primarily defined by the SPM
Anatomy toolbox (Eickhoff et al., 2005, 2007; Eickhoff,
Heim, Zilles, & Amunts, 2006) and MRIcro (www.cabi
.gatech.edu/mricro/mricro).
MVPA
MVPA was performed with the Pattern Recognition of Neuroim-
aging Toolbox (PRoNTo; Schrouff et al., 2013) Version 2.1,
which was implemented in MATLAB (www.mathworks
.com). In this analysis, we investigated how facial expres-
sions were represented by activity patterns in ROIs related
to the processing of faces and facial expressions and how
the neural representation was different between young
and older adults. The MVPA was conducted to examine
activity patterns in OFC, pSTS, FFA, and AMY ROIs. Given
that right pSTS and FFA regions are more dominant than
the left regions in the processing of faces (Ishai, Schmidt, &
Boesiger, 2005; Puce et al., 1998; Kanwisher, McDermott, &
Chun, 1997), ROIs in these regions were defined only in the
right hemisphere. The OFC and AMY ROIs were defined
bilaterally. Details of these ROIs are described above.
Before MVPA, activation in individual trials was estimated
by a new GLM in each participant (Rissman, Gazzaley, &
D’Esposito, 2004). In this model, activation in each trial
was modeled by convolving a vector of onsets with a canon-
ical HRF in the context of the GLM, in which the trial onset
was set at the timing when each stimulus was presented
with a duration of 0 sec. Six parameters reflecting head
motion were also included in this model as a confounding
factor. This model produced trial-by-trial beta estimates for
the whole brain in each participant, and beta images for
individual trials in each participant were applied to the
pattern classification model created by PRoNTo.
In the MVPA with PRoNTo, a whole-brain mask image
excluding voxels without beta values was first created for
each participant, and the pattern classification was
statistically analyzed within this whole-brain mask
image. The features were extracted in each ROI and were
centered by the mean of training data for each voxel.
Three patterns of binary classification (Happy vs. Neutral,
Happy vs. Angry, and Neutral vs. Angry) were conducted by
support vector machine classifiers with a linear kernel in all
voxels of each ROI. Training and testing followed a leave-
one-run-out cross-validation procedure with three runs for
training data and one run for testing data. Mean balanced
accuracy (BA) was computed for all ROIs in each partici-
pant, and the mean BA values for each ROI were tested
by permutation tests. In the permutation tests, pattern
classification analyses were repeated 1000 times on data
where labels of the two classes were randomly swapped.
This manipulation produced a null distribution of BA
scores expected when the two classes of facial expressions
are not represented by activity pat-
terns in each ROI. This procedure has been validated in
other studies (Etzel, 2017; Haynes, 2015). These results
were corrected by the false discovery rate (FDR; q < .05)
to control for false positives (Benjamini & Hochberg, 1995).
In addition, we confirmed the BA values by one-sample
t tests (one-tailed) against chance level (50%) in each age
group; these values have been conventionally employed
in functional neuroimaging studies.
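
In scikit-learn terms, the scheme just described (linear SVM, leave-one-run-out cross-validation, balanced accuracy, label-permutation null) looks roughly like the sketch below; the authors used PRoNTo, so this is an analogy with synthetic data, not their pipeline.

```python
# Sketch of the classification scheme described above with synthetic data:
# linear SVM, leave-one-run-out CV, balanced accuracy, permutation null.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))  # trial-wise betas: trials x voxels
y = rng.integers(0, 2, 120)          # binary labels, e.g., Happy vs. Neutral
runs = np.repeat([0, 1, 2, 3], 30)   # four encoding runs

cv, clf = LeaveOneGroupOut(), SVC(kernel="linear")
observed_ba = cross_val_score(clf, X, y, groups=runs, cv=cv,
                              scoring="balanced_accuracy").mean()

# Permutation null: reclassify with shuffled labels to simulate BA scores
# expected when expressions are not represented in the activity patterns.
null_ba = np.array([
    cross_val_score(clf, X, rng.permutation(y), groups=runs, cv=cv,
                    scoring="balanced_accuracy").mean()
    for _ in range(100)              # 1000 permutations in the study
])
p_value = (np.sum(null_ba >= observed_ba) + 1) / (len(null_ba) + 1)
```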
Functional Connectivity Analysis
To investigate how functional connectivity related to
memory for facial expressions was affected by aging, we
analyzed the functional connectivity of HC, which is
related to associative memory (for review, see Diana,
Yonelinas, & Ranganath, 2007; Eichenbaum, Yonelinas,
& Ranganath, 2007; Davachi, 2006), with left OFC, right
pSTS, and right FFA as seed regions in each age group. These
seeds were determined from the results of the functional localizer
task, in which regions related to the processing of faces
and facial expressions were identified. In the functional
connectivity analysis, we employed a generalized form of
context-dependent psychophysiological interaction (gPPI;
McLaren, Ries, Xu, & Johnson, 2012). Before preparing the
gPPI analysis, four encoding runs were collapsed into one
run, and trial-related regressors of six conditions, which
were defined by facial expression (Happy, Neutral, and
Angry) and subsequent memory performance during
retrieval (Hit and Miss), were remodeled by convolving
onset vectors with a canonical HRF in the context of
the GLM. The onset timing, when each stimulus was
presented, was set as an event with a duration of 0 sec.
The NR condition was also applied to this model as a
regressor. Six parameters reflecting head motion in each
participant were included in this model as confounding
factors.
Regions showing significant activation in the ROI analy-
sis of the functional localizer task were defined as seed
regions. Seed regions in left OFC, right pSTS, and right FFA
were set as VOI spheres with a 6-mm radius centered
on the peak voxel from the functional localizer task. However,
the left OFC seed VOI could not be
extracted from the data of one young adult and one older
adult. Thus, the functional connectivity analysis of the left
OFC seed was conducted with fMRI data from 26 young
and 27 older adults.
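
Extracting the seed time series from a 6-mm sphere can be sketched with nilearn's spheres masker, as below; the left OFC peak coordinate comes from Table 2, the functional file name is hypothetical, and SPM's eigenvariate-based VOI extraction differs in detail.

```python
# Sketch of seed time-series extraction: a 6-mm-radius sphere around the
# left OFC localizer peak (Table 2); the input file name is hypothetical,
# and SPM extracts the first eigenvariate rather than a plain mean.
from nilearn.maskers import NiftiSpheresMasker

seed_masker = NiftiSpheresMasker(seeds=[(-41, 29, -6)], radius=6.0,
                                 detrend=True, standardize=True)
seed_bold = seed_masker.fit_transform("func_preprocessed.nii")  # (scans, 1)
```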
The functional connectivity analysis was performed with
the gPPI toolbox (www.nitrc.org/projects/gppi), by which
a model at the individual level was created. The model
included a design matrix with three columns of (1)
condition-related regressors formed by convolving vectors
of condition-related onsets with a canonical HRF, (2)
time series BOLD signals extracted from the seed region,
and (3) PPI regressors as the interaction between (1) and
(2). In the present study, the gPPI toolbox produced a
model including the PPI and condition-related regressors
of six experimental conditions (Happy-Hit, Happy-Miss,
Neutral-Hit, Neutral-Miss, Angry-Hit, and Angry-Miss) and
the NR condition, as well as the BOLD signals in each seed.
In addition, six regressors related to head motion were
included in this model as confounding factors. Parameters
in this model were estimated in each participant. Linear
contrasts were computed in the model for each seed
region, and regions showing a significant effect in the
PPI regressor contrasts were considered to be functionally
connected with each seed region at the statistical thresh-
old. Contrast images of the PPI regressors reflecting func-
tional connectivity during successful and unsuccessful
encoding in three facial expressions (Happy-Hit, Happy-
Miss, Neutral-Hit, Neutral-Miss, Angry-Hit, and Angry-Miss)
were obtained for each participant. In addition, the PPI
regressor contrasts were computed by comparing success-
ful with unsuccessful encoding in each facial expression
(Happy-Hit > Happy-Miss, Neutral-Hit > Neutral-Miss,
and Angry-Hit > Angry-Miss) and by comparing between
facial expressions in the Hit trials (Happy-Hit > Neutral-Hit,
Happy-Hit > Angry-Hit, Neutral-Hit > Happy-Hit, Neutral-
Hit > Angry-Hit, Angry-Hit > Happy-Hit, and Angry-Hit >
Neutral-Hit). These contrast images were used in the
group-level analysis.
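
Conceptually, each PPI regressor is the product of a condition's HRF-convolved onset vector and the seed time series. The sketch below shows that product for one condition; proper gPPI first deconvolves the seed signal to the neural level before multiplying (McLaren et al., 2012), a step omitted here for brevity.

```python
# Conceptual sketch of one PPI regressor (gPPI builds one per condition).
# Real gPPI deconvolves the seed BOLD signal to the "neural" level before
# forming the product; that step is omitted here for simplicity.
import numpy as np

def ppi_regressor(onsets_sec, seed_bold, t_r, hrf):
    n_scans = len(seed_bold)
    psych = np.zeros(n_scans)                  # psychological variable
    psych[(np.asarray(onsets_sec) / t_r).astype(int)] = 1.0
    psych = np.convolve(psych, hrf)[:n_scans]  # convolve with the HRF
    return psych * seed_bold                   # interaction (PPI) term
```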
In the group-level analysis, we investigated which func-
tional connectivity patterns during successful encoding
of each facial expression were identified in each age group
of young and older adults. In the functional connectivity
analysis specific to the Happy-Hit condition, a one-sample
t test for the Happy-Hit contrasts was inclusively masked
by three contrasts of Happy-Hit > Happy-Miss, Happy-
Hit > Neutral-Hit, and Happy-Hit > Angry-Hit ( pag < .05).
Functional connectivity specific to the Angry-Hit condition
was analyzed in a one-sample t test for the Angry-Hit con-
trasts, which was masked inclusively by contrasts of Angry-
Hit > Angry-Miss, Angry-Hit > Neutral-Hit, and Angry-Hit
> Happy-Hit ( pag < .05). The same procedures of statistical
analysis for the PPI regressor contrast images were
employed to find significant functional connectivity spe-
cific to the Neutral-Hit condition. In these analyses, the
height threshold at the voxel level ( p < .001) was cor-
rected for multiple comparisons in HC ROI (Amunts
et al., 2005) (FWE, p < .05) with a minimum cluster size
of two voxels.
RESULTS
Behavioral Results
Table 3 summarizes young and older adults’ behavioral
data during (1) the encoding phase (RTs), (2) the retrieval
phase (accuracy and RTs), and (3) the arousal/valence
rating phase.

Table 3. Behavioral Results

                                   Young (SD)                                             Old (SD)
                                   Happy             Neutral           Angry              Happy             Neutral           Angry

Encoding: response time (msec)
  Subsequent hit                   1601.47 (311.69)  1618.97 (297.08)  1694.30 (348.17)   1603.63 (328.74)  1643.79 (394.56)  1751.27 (342.78)
  Subsequent miss                  1542.95 (267.28)  1676.42 (331.24)  1783.35 (325.37)   1653.63 (356.20)  1634.94 (368.10)  1806.16 (335.80)

Retrieval: proportion of recall accuracy for facial expressions
  Hit/hit for names                0.62 (0.16)       0.64 (0.20)       0.62 (0.18)        0.45 (0.17)       0.46 (0.12)       0.38 (0.14)

Retrieval: proportion of recognition accuracy for names
  Hit for names                    0.78 (0.17)       0.75 (0.16)       0.74 (0.16)        0.90 (0.12)       0.89 (0.11)       0.89 (0.10)
  Miss for names                   0.22 (0.17)       0.25 (0.16)       0.26 (0.16)        0.10 (0.12)       0.11 (0.11)       0.11 (0.09)
  FA for names (all expressions)   0.18 (0.17)                                            0.56 (0.27)
  CR for names (all expressions)   0.82 (0.17)                                            0.44 (0.27)

Retrieval: number of trialsa
  Hit                              18.96 (7.35)      17.74 (6.81)      17.19 (7.88)       15.29 (6.23)      15.68 (4.06)      12.07 (5.22)
  Miss                             19.56 (6.85)      19.52 (7.26)      19.19 (6.57)       22.46 (6.48)      22.64 (3.80)      23.54 (5.90)
  FA for names (all expressions)   6.59 (5.92)                                            21.07 (9.97)
  CR for names (all expressions)   31.59 (7.57)                                           16.75 (10.65)

Retrieval: response time (msec)
  Hit                              1868.34 (290.11)  2038.44 (342.97)  1948.71 (255.42)   2098.08 (364.88)  2280.68 (342.20)  2222.67 (428.80)
  Miss                             2252.41 (322.30)  2204.01 (327.05)  2244.64 (359.06)   2398.73 (461.66)  2326.42 (467.67)  2383.47 (418.55)
  FA for names (all expressions)   2534.43 (405.88)                                       2428.22 (448.14)
  CR for names (all expressions)   1773.10 (352.89)                                       2151.80 (425.32)

Rating scores
  Emotional arousal                6.15 (0.97)       1.75 (0.56)       6.07 (0.96)        6.42 (0.95)       2.92 (1.75)       6.46 (1.21)
  Emotional valence                7.28 (0.57)       4.94 (0.18)       2.83 (0.52)        7.52 (0.51)       4.83 (0.40)       2.51 (0.57)

SD = standard deviation; FA = false alarm; CR = correct rejection.
a The Hit trial was defined as the correct remembering of facial expressions associated with learned names, and the Miss trial included the correct recognition of names (incorrect remembering of facial expressions associated with learned names, and the "Unknown" responses to learned names) and incorrect recognition of names (the "New" responses to learned names).
Encoding
Encoding RTs. These RTs correspond to the task of judg-
ing facial expressions of happy, neutral, or angry faces.
Encoding RTs were analyzed with a three-way mixed ANOVA
with factors of Age Group (Young and Old), Facial
Expression (Happy, Neutral, and Angry), and Subsequent
Memory Performance (subsequent Hit and subsequent
Miss). Post hoc tests in all analyses used the Bonferroni
method. The ANOVA on RTs showed significant main
effects of Facial Expression, F(2, 106) = 22.98, p < .001,
ηp² = .30, and Subsequent Memory Performance, F(1, 53) = 6.23,
p = .016, ηp² = .11, as well as reliable interactions between
Facial Expression and Subsequent Memory Performance,
F(2, 106) = 3.52, p = .033, ηp² = .06, and between Age Group,
Facial Expression, and Subsequent Memory Performance,
F(2, 106) = 5.14, p = .007, ηp² = .09. The remaining main
effect and interactions were not
significant. Post hoc tests for young adults showed that
RTs for happy facial expressions were significantly faster
than those for angry facial expressions in the subsequent
Miss trials ( p < .001), whereas RTs in the subsequent Hit
trials did not show significant differences among any facial
expressions. Post hoc tests for older adults demonstrated
that RTs for happy facial expressions were significantly fas-
ter than those for angry facial expressions in the subse-
quent Hit trials ( p = .017), and that RTs for happy ( p =
.010) and neutral ( p = .002) facial expressions were signif-
icantly faster than those for angry facial expressions in the
subsequent Miss trials. A significant difference in RTs
between the subsequent Hit and Miss trials was not found
for any facial expression.
Retrieval
Accuracy. Recall accuracies for facial expressions were
defined as the proportion of the Hit trials to the Hit trials
for names, and were analyzed with a two-way mixed
ANOVA with factors of Age Group (Young and Old) and
Facial Expression (Happy, Neutral, and Angry). The ANOVA
demonstrated a significant main effect of Age Group,
F(1, 53) = 31.43, p < .001, ηp² = .37, but neither a main effect
of Facial Expression, F(2, 106) = 2.73, p = .070, ηp² = .05,
nor an interaction between Age Group and Facial Expression,
F(2, 106) = 1.14, p = .323, ηp² = .02. The recall accu-
racies are illustrated in Figure 2, which shows a clear facial
expression memory deficit in older adults compared with
young adults.
Recognition accuracies for names, which were defined
as the proportion of the Hit trials for names learned in the
encoding run to all trials for learned names, were analyzed
with a two-way mixed ANOVA with factors of Age Group
(Young and Old) and Facial Expression (Happy, Neutral,
and Angry). In this ANOVA, a main effect of Age Group
was significant, F(1, 53) = 15.39, p < .001, ηp² = .23;
neither the main effect of Facial Expression, F(2, 106) =
2.05, p = .134, ηp² = .04, nor the interaction between Age
Group and Facial Expression, F(2, 106) = 0.72, p = .490,
ηp² = .00, was significant.
Retrieval RTs. Retrieval RTs were also analyzed with
a three-way mixed ANOVA with factors of Age Group
(Young and Old), Facial Expression (Happy, Neutral, and
Angry), and Memory Performance (Hit and Miss). The
ANOVA showed significant main effects of Age Group,
F(1, 53) = 4.76, p = .034, ηp² = .08; Facial Expression,
F(2, 106) = 4.95, p = .009, ηp² = .09; and Memory Performance,
F(1, 53) = 50.31, p < .001, ηp² = .49; as well as a
significant interaction between Facial Expression and
Memory Performance, F(2, 106) = 12.32, p < .001, ηp² = .19.
The other interactions were not significant.
tests showed that happy facial expressions were remem-
bered faster than neutral ( p < .001) and angry ( p =
.015) facial expressions only in the Hit trials. In post hoc
tests, we also found that happy and angry facial expressions
were remembered faster in the Hit trials than in the Miss
trials ( p < .001). However, a significant difference in RTs
between the Hit and Miss trials was not identified for neutral
facial expressions. These RT results indicate that enhanced
retrieval of happy facial expressions was observed com-
monly in both young and older adults (see Figure 2).

Figure 2. Behavioral results of retrieval performance and response time in the Hit trials (correct retrieval of facial expressions associated with learned names). (A) Recall accuracy for facial expressions (Hit trials/Hit trials for names). (B) Response time in the Hit trials during retrieval. Error bars represent standard errors.
Ratings
Arousal and valence rating scores were analyzed using two-
way mixed ANOVAs with factors of Age Group (Young and
Old) and Facial Expression (Happy, Neutral, and Angry)
separately for arousal and valence ratings. The ANOVA
on arousal ratings revealed significant main effects of
Age Group, F(1, 53) = 7.79, p = .007, ηp² = .13, and Facial
Expression, F(2, 106) = 306.00, p < .001, ηp² = .85, as well
as a reliable interaction between them, F(2, 106) = 3.61,
p = .031, ηp² = .06. Post hoc tests showed that happy
and angry faces were rated as more arousing than
neutral faces ( p < .001 in both contrasts) and that neutral
faces were rated as more arousing by older adults
than by young adults ( p = .003). The ANOVA on valence
ratings yielded a nonsignificant main effect of Age Group,
F(1, 53) = 1.38, p = .246, ηp² = .03, a reliable main effect of
Facial Expression, F(2, 106) = 1078.49, p < .001, ηp² = .95,
and a significant interaction between Age Group and Facial
Expression, F(2, 106) = 3.84, p = .025, ηp² = .07. Post hoc
tests showed that happy faces were rated as more positive
than neutral and angry faces and that neutral faces
were rated as more positive than angry faces in both
age groups ( p < .001 in all contrasts). In post hoc tests,
no significant difference between young and older adults
was found for any facial expression.
fMRI Results
Univariate Analysis
In the univariate analysis, using ANOVA, we found signifi-
cantly greater activation in pSTS, FFA, and AMY during the
processing of angry facial expressions than during that of other
facial expressions, and activity in AMY was also significantly
greater for both angry and happy facial expressions com-
pared with neutral facial expressions. However, activity in
these regions did not reflect a main effect of age group,
and an interaction of age group with facial expression
was not significant.
Encoding-related activation in the memory task was
analyzed with a two-way mixed ANOVA with factors of
Age Group (Young and Old) and Facial Expression
(Happy, Neutral, and Angry). Three types of analysis
were performed (see Methods section), and their results
are displayed in Figure 3 and Table 4. In the first analysis,
which focused on expression-specific effects, the right pSTS,
F(2, 106) = 19.02, p < .001, ηp² = .26, and F(2, 106) = 16.49,
p < .001, ηp² = .24; the right FFA, F(2, 106) = 14.02,
p < .001, ηp² = .21; and the bilateral AMY [left AMY: F(2,
106) = 16.23, p < .001, ηp² = .23; right AMY: F(2, 106) =
16.02, p < .001, ηp² = .23] showed activation that was
significantly greater for angry expressions than for both
happy and neutral expressions. In addition, the right AMY
displayed greater activity for both happy and angry facial
expressions than for neutral facial expressions, consistent
with the arousal rating scores, F(2, 106) = 16.02, p < .001,
ηp² = .23. These results of significant activation were
corrected for multiple comparisons in the hypothesis-driven
ROI (FWE, p < .05). No significant activation was identified
for happy facial expressions compared with the other facial
expressions. In the second analysis, which focused on
age-related differences in activation, and in the third analysis,
which focused on the interaction between age group and
facial expression, no significant activation was found in
any region.
Figure 3. Results of univariate analysis. (A) Regions showing significantly greater activation for angry facial expressions than for happy and neutral facial expressions. (B) Regions showing significantly greater activation for happy and angry facial expressions than for neutral expressions. The parameter estimates in the graphs were extracted from peak voxels in each region. Error bars represent standard errors. Hap = happy facial expression; Neu = neutral facial expression; Ang = angry facial expression.
Table 4. Regions Showing Significant Activation

                                                  MNI coordinates
Regions                          L/R   BA       x     y     z     Z Value   k

Main effect of facial expression (masked inclusively by Angry > Happy & Angry > Neutral)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
Middle temporal gyrus (pSTS)     R     21/37    52    −50   3     5.22      39
Middle temporal gyrus (pSTS)     R     21/22    52    −37   3     4.86      23
Fusiform gyrus (FFA)             R     37       43    −48   −15   4.47      2
AMY                              L              −21   −6    −17   4.82      7
AMY                              R              23    −6    −17   4.79      5

Main effect of facial expression (masked inclusively by Happy > Angry & Happy > Neutral)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
No significant activation was identified

Main effect of facial expression (masked inclusively by Angry > Neutral & Happy > Neutral)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
AMY                              R              23    −6    −17   4.79      4

Main effect of age group (masked inclusively by young > old)
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
No significant activation was identified

Interaction between facial expression and group
ROI-based analysis (OFC, pSTS, FFA, OFA, and AMY)
No significant activation was identified

BA = Brodmann area; k = cluster size; L = left; R = right.
Cifra 4. (A) ROI image used in MVPA (colored blue). (B) Multivariate classification accuracy (balanced accuracy) for facial expressions during the
encoding phase in each ROI. Error bars represent standard errors, and the dotted line represents chance-level classification accuracy (50%). * =
significant results by permutation tests (FDR, q < .05); Hap = happy facial expression; Neu = neutral facial expression; Ang = angry facial expression.
MVPA
The MVPA demonstrated that discrimination between facial expressions by activity patterns in pSTS was significantly above chance in both age groups, whereas in FFA and OFC, significant classification accuracies for discriminating between facial expressions were found only in young adults. In AMY, classification accuracies for discriminating between facial expressions were not significant in either young or older adults.

The MVPA results are displayed in Figure 4 and Table 5. The accuracy of MVPA in classifying facial expressions during the encoding phase was analyzed separately in four ROIs: OFC, right pSTS, right FFA, and AMY. All significant results were corrected by the FDR (q < .05) to control false positives (Benjamini & Hochberg, 1995). The balanced accuracy scores in bilateral OFC showed that activation patterns in this region could successfully classify happy versus angry faces in both young and older adults (Young: p = .014; Old: p < .001). In right pSTS, activation patterns successfully distinguished angry versus happy faces (Young: p < .001; Old: p < .001) and angry versus neutral faces (Young: p = .004; Old: p < .001) in both age groups. In contrast, only young adults displayed activation patterns that could accurately classify happy versus neutral faces in OFC (p < .001), and angry versus happy (p = .016) or angry versus neutral faces (p = .013) in right FFA. Finally, neither age group displayed significant classification accuracy for facial expressions in AMY.
Table 5. MVPA Results of Balanced Accuracies and p Values in Each ROI

                            Balanced accuracy (SD)       Permutation test p      One-sample t test p
ROI / Contrast              Young        Old             Young      Old          Young      Old

OFC
  Happy vs. Neutral         0.55 (0.05)  0.52 (0.07)     < .001a    .037         < .001     .040
  Happy vs. Angry           0.53 (0.08)  0.54 (0.09)     .014a      < .001a      .029       .006
  Neutral vs. Angry         0.52 (0.08)  0.52 (0.05)     .108       .044         .176       .019

Right pSTS
  Happy vs. Neutral         0.52 (0.06)  0.51 (0.05)     .065       .163         .039       .084
  Happy vs. Angry           0.56 (0.07)  0.55 (0.06)     < .001a    < .001a      < .001     < .001
  Neutral vs. Angry         0.54 (0.06)  0.57 (0.05)     .004a      < .001a      .001       < .001

Right FFA
  Happy vs. Neutral         0.50 (0.06)  0.50 (0.08)     .604       .396         .625       .431
  Happy vs. Angry           0.53 (0.05)  0.50 (0.07)     .016a      .416         .004       .454
  Neutral vs. Angry         0.53 (0.06)  0.51 (0.07)     .013a      .189         .007       .214

AMY
  Happy vs. Neutral         0.51 (0.06)  0.50 (0.06)     .182       .395         .169       .398
  Happy vs. Angry           0.50 (0.05)  0.52 (0.07)     .464       .080         .473       .068
  Neutral vs. Angry         0.49 (0.06)  0.50 (0.06)     .757       .430         .797       .454

SD = standard deviation.
a Significant results after FDR correction for the results of the permutation tests (q < .05). p values are shown before the FDR correction (uncorrected).
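The FDR flags in Table 5 follow the Benjamini–Hochberg step-up procedure: sort the m uncorrected p values, find the largest rank k with p(k) ≤ (k/m)q, and declare the k smallest p values significant. A small illustrative sketch follows; the input p values are arbitrary examples, not the table's full set.

# Benjamini-Hochberg FDR: flag which p values are significant at level q.
# Illustrative sketch only; the inputs are arbitrary example values.
import numpy as np

def fdr_bh(p_values, q=0.05):
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m   # step-up thresholds (k/m)*q
    passing = p[order] <= thresholds
    significant = np.zeros(m, dtype=bool)
    if passing.any():
        k = np.max(np.nonzero(passing)[0])     # largest passing rank
        significant[order[:k + 1]] = True
    return significant

print(fdr_bh([0.001, 0.014, 0.108, 0.065]))    # [ True  True False False]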
Figure 5. Regions showing significant functional connectivity during successful encoding. (A) Functional connectivity between the HC and left OFC
for happy facial expressions in young adults (Z value = 4.17) and older adults (Z value = 4.10). (B) Functional connectivity between the HC and right
pSTS for happy facial expressions in older adults (left: Z value = 4.02; right: Z value = 4.31). (C) Functional connectivity between the HC and right
FFA for happy (Z value = 4.82) and neutral facial expressions (Z value = 5.63) in young adults.
Functional Connectivity Analysis
In the functional connectivity analysis, we found that func-
tional connectivity reflecting subsequent recollection of
facial expressions was significant between HC and OFC
for happy facial expressions in both age groups, between
HC and FFA for happy and neutral facial expressions only
in young adults, and between HC and pSTS for happy facial
expressions only in older adults.
Previous studies have shown that successful memory encoding of faces is associated with increased functional connectivity between HC, a critical region for the recollection of episodic memories (for review, see Diana et al., 2007; Eichenbaum et al., 2007; Davachi, 2006), and cortical regions involved in the processing of faces and their expressions (Tsukiura & Cabeza, 2008, 2011; Dennis et al., 2008). Thus, for each age group and each facial expression, we assessed functional connectivity (gPPI) predicting subsequent recollection of facial expressions associated with names between HC and three cortical ROIs: left OFC, right pSTS, and right FFA (seeds identified in the functional localizer task). Results of the functional connectivity analysis are illustrated in Figure 5, including Z values and coordinates of the regions showing the effects. In left OFC, significant functional connectivity with HC was found for happy facial expressions in both age groups [Young: t(25) = 5.08, p < .001, d = 1.00; Old: t(26) = 4.92, p < .001, d = 0.95]. In right pSTS, reliable functional connectivity with HC was observed for happy facial expressions only in the old group [left HC: t(27) = 4.77, p < .001, d = 0.90; right HC: t(27) = 5.23, p < .001, d = 0.99]. Finally, in right FFA, significant functional connectivity with HC was found for both happy, t(26) = 6.21, p < .001, d = 1.20, and neutral facial expressions, t(26) = 8.00, p < .001, d = 1.54, only in young adults. These significant functional connectivity effects were corrected for multiple comparisons within the HC ROI (FWE, p < .05).
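The core of a PPI-style analysis like the one above is an interaction regressor, a task regressor multiplied with the seed timeseries, whose GLM beta indexes condition-specific coupling between seed and target. The sketch below is a deliberately simplified toy version on synthetic data; it forms the interaction at the BOLD level, whereas gPPI as described by McLaren et al. (2012) forms it at the estimated neural level via deconvolution.

# Toy PPI sketch on synthetic data: test whether seed-target coupling
# increases during one condition. Simplified relative to gPPI, which
# forms the interaction at the deconvolved neural level.
import numpy as np

rng = np.random.default_rng(1)
n_scans = 300
seed = rng.normal(size=n_scans)                          # e.g., HC seed timeseries
task = ((np.arange(n_scans) % 30) < 10).astype(float)    # condition boxcar
psych = task - task.mean()                               # centered task regressor
ppi = seed * psych                                       # interaction regressor

# Synthetic target region, coupled to the seed more strongly during task
target = 0.2 * seed + 0.5 * ppi + rng.normal(scale=1.0, size=n_scans)

# GLM with seed, task, interaction, and intercept; the PPI beta is the
# condition-specific change in coupling.
X = np.column_stack([seed, psych, ppi, np.ones(n_scans)])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta = {betas[2]:.2f}")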
DISCUSSION
In terms of age effects, two sets of findings emerged from
the present study. First, during the processing of facial
expressions, univariate activity and MVPA discrimination
in pSTS and AMY were similar in both young and older
adults, whereas MVPA activity patterns in FFA and OFC
discriminated facial expressions less accurately in older
than young adults. These results suggest that neural representations of facial expressions in FFA and OFC are affected by age-related dedifferentiation and that activity patterns in OFC reflect the age-related positivity effect. Second, functional connectivity predicting subsequent face recollection was significant between HC and OFC for happy facial expressions in both age groups, between HC and FFA for happy and neutral facial expressions only in young adults, and between HC and pSTS for happy facial expressions only in older adults. Some of these results suggest compensatory mechanisms and positivity effects in older adults. These two sets of findings are discussed in separate sections below.
Univariate and MVPA Results during the Perception
of Emotional Facial Expressions
The first set of findings was that univariate activity and mul-
tivariate activity patterns in pSTS and AMY during the pro-
cessing of facial expressions were similar in young and
older adults, whereas in FFA and OFC, multivariate activity
patterns discriminated facial expressions less accurately in
older than young adults. These findings suggest that the
contributions of pSTS and AMY to the processing of facial
expressions are relatively preserved in older adults,
whereas the representations of facial expressions in FFA
and OFC are affected by the dedifferentiation in older
adults.
In univariate analyses, we found that for both young and
older adults, AMY activity was enhanced by both happy
and angry facial expressions, and that pSTS and FFA activ-
ity was increased by angry facial expressions. These find-
ings are consistent with previous cognitive neuroscience
studies. For example, AMY shows significantly greater
activity for highly arousing faces than for neutral faces
(Winston, O’Doherty, & Dolan, 2003; Yang et al., 2002; Breiter et al., 1996), and AMY lesions reliably impair the perception of negative facial expressions (Sato et al., 2002; Adolphs, Tranel, Damasio, & Damasio, 1994). The involvement of pSTS and FFA in the processing of emotional facial expressions has also been identified in prior studies
(Sormaz, Watson, Smith, Young, & Andrews, 2016; Zhang
et al., 2016; Wegrzyn et al., 2015; Harry et al., 2013; Said
et al., 2010). The absence of age effects in univariate
analysis is consistent with evidence that the ability to
discriminate facial expressions (Murphy, Millgate, Geary,
Catmur, & Bird, 2019; D’Argembeau & van der Linden,
2004), the utilization of visual cues to discriminate facial
expressions (Smith et al., 2018), and the neural mecha-
nisms of the processing of facial expressions (Goncalves
et al., 2018) are relatively preserved in older adults.
MVPA results showed that activity patterns in FFA suc-
cessfully classified facial expressions in young but not in
older adults. The finding of significant MVPA classification
in young adults fits with abundant evidence of the impor-
tance of FFA for processing facial expressions, including
functional neuroimaging (Zhao et al., 2020; Wegrzyn
et al., 2015; Skerry & Saxe, 2014; Harry et al., 2013; Fox,
Moon, Iaria, & Barton, 2009) and prosopagnosia (Bentin,
Degutis, D’Esposito, & Robertson, 2007) findings. The
failure to distinguish facial expressions from FFA activity patterns in older adults agrees with evidence that FFA representations of facial identity show dedifferentiation in older adults (Lee et al., 2011; Goh et al., 2010). Functional neuroimaging results suggest that FFA represents the morphological differences conveyed by facial identity (for review, see Bernstein & Yovel, 2015). Thus, one pos-
sibility is that impaired representations of facial expres-
sions in older adults stem from a deficit in processing facial
identity, which is a well-known deficit in older adults
(Chaby, Narme, & George, 2011; Habak, Wilkinson, &
Wilson, 2008; Boutet & Faubert, 2006). In the present
study, we assessed the discrimination of facial expressions
but not the discrimination of facial identities. A future
study that examines both types of discrimination in the
same participants could investigate the hypothesis that
deficits in the two abilities interact in older adults.
In OFC, activity patterns in older adults distinguished
between happy and angry facial expressions but not
between happy and neutral facial expressions, whereas
activity patterns in young adults allowed both distinctions.
The finding that OFC representations in older adults could
not distinguish happy facial expressions from emotionally
ambiguous neutral facial expressions is consistent with the
positivity effect, which has been observed in a previous study as emotionally ambiguous stimuli being rated more positively by older than by young adults (Zebrowitz et al., 2017).
It is a topic of debate whether the positivity effect in older
adults reflects motivational differences in allocating atten-
tion to information or more basic deficits in the processing
mechanisms of emotions (for review, see Ruffman, Henry,
Livingstone, & Phillips, 2008; Mather & Carstensen, 2005).
In terms of the latter perspective, the present finding of
age-related dedifferentiation in OFC is consistent with evi-
dence that this region is structurally compromised by aging
(Shen et al., 2013; Salat et al., 2009; Lamar & Resnick,
2004; Tisserand et al., 2002). This OFC deficit might lead
to a positive shift in older adults. This alternative is con-
sistent with the finding that faces with negative facial
expressions were rated as more approachable by patients
with OFC lesions than controls ( Willis, Palermo, Burke,
McGrillen, & Miller, 2010).
Functional Connectivity Predicting Subsequent
Recollection of Facial Expressions
The second set of findings was that encoding-related func-
tional connectivity was found between HC and OFC for
happy faces in both age groups, between HC and pSTS
for happy faces only in older adults, and between HC
and FFA for happy and neutral faces only in young adults.
Before discussing these findings, it is worth mentioning
that angry facial expressions did not modulate encoding-
related functional connectivity between HC and cortical
regions, consistent with the lack of enhanced memory
for angry faces. These results could be related to the use of
a cued-recall test that does not display faces during
retrieval, unlike the recognition test used in some studies.
Consistent with this idea, several studies in which emo-
tional faces were presented during retrieval found that
angry expressions enhanced face memory performance
(Keightley, Chiew, Anderson, & Grady, 2011; Sergerie,
Lepage, & Armony, 2005; Foa, Gilboa-Schechtman, Amir,
& Freshman, 2000), whereas others in which emotional
faces were not presented during retrieval (e.g., cued-recall
or source memory test) did not find the angry-related
memory enhancement (Bowen & Kensinger, 2017;
D’Argembeau & van der Linden, 2007; Shimamura, Ross,
& Bennett, 2006; Fenker, Schott, Richardson-Klavehn,
Heinze, & Duzel, 2005).
The finding that HC–OFC interactions contributed to
the encoding of happy faces is consistent with our previ-
ous results (Tsukiura & Cabeza, 2008). As noted before,
happy facial expressions have rewarding values in a social
context (Hayward, Pereira, Otto, & Ristic, 2018; Yang &
Urminsky, 2018) and OFC is involved in the processing
of rewards (for review, see O’Doherty, 2004). Moreover,
studies on the enhancement of episodic memory by
monetary or social rewards have linked this enhancement
to functional connectivity between HC and OFC
(Sugimoto et al., 2021; Frank, Preston, & Zeithamova,
2019; Tsukiura & Cabeza, 2008, 2011; Shigemune et al.,
2010). Thus, the present study replicates this literature
by showing that HC–OFC interactions contribute to
memory for happy facial expressions in both young and
older adults.
In contrast with HC–OFC interactions, functional connectivity between HC and FFA contributed to memory for happy and neutral facial expressions in young but not older adults. The contribution of HC–FFA interactions to face and face-related associative memories in young adults has been reported in several studies (Liu, Grady, & Moscovitch, 2018; Summerfield et al., 2006; Sperling et al., 2003). Age-related reductions in functional connectivity between HC and FFA during encoding are consistent with a study on memory for face–scene associations (Dennis et al., 2008). It is possible that an age-related decrease in HC–FFA interactions causes the impairment of face-related memories in older adults.
Finally, encoding-related functional connectivity of HC
with pSTS was significant for happy facial expressions only
in older adults. This additional contribution of pSTS could reflect a compensatory mechanism in older adults. Several previous studies have demonstrated that higher levels of neural activity or functional connectivity play a compensatory role in older adults (for review, see Cabeza et al., 2018; Sala-Llonch, Bartrés-Faz, & Junqué, 2015; Cabeza, 2002). For example, compensatory functional connectivity between the medial temporal lobe and PFC was recruited in older adults during both encod-
ing and retrieval of episodic memories (Dennis et al., 2008;
Daselaar, Fleck, Dobbins, Madden, & Cabeza, 2006). In
another fMRI study, interactions between HC and ventromedial PFC during the successful encoding of emotionally positive pictures were significantly stronger in older adults than in young adults (Addis et al., 2010). Thus, the encoding-related HC–pSTS functional
connectivity for happy facial expressions in older adults
could reflect the age-dependent compensatory mecha-
nisms for positive socioemotional values, an effect related
to the positivity effect.
Conclusion
In the present event-related fMRI study, we investigated
age-related differences in neural representations and func-
tional connectivity during the perception and subsequent
memory of emotional facial expressions associated with
names. First, during the perception of emotional facial
expressions, univariate activity and multivariate activity
patterns in pSTS and AMY were similar in young and older
adults, whereas multivariate activity patterns in FFA and
OFC classified facial expressions less accurately in older
adults. The latter results suggest that neural representa-
tions of facial expressions in FFA and OFC are affected
by age-related dedifferentiation, and that activity patterns
in OFC reflect the positivity effect, which is a tendency
to interpret neutral facial expressions as emotionally pos-
itive expressions in older adults. Second, recollection-
predicting functional connectivity was found between
HC and OFC for happy facial expressions in both age
groups, between HC and FFA for happy and neutral facial
expressions only in young adults, and between HC and
pSTS for happy facial expressions only in older adults.
These findings could reflect compensatory mechanisms
and positivity effects in older adults. Taken together, the
results in the present study clarify the effects of aging
on neural representations and mechanisms during per-
ceiving and encoding facial expressions.
Acknowledgments
We would like to thank Drs. Nobuhito Abe, Kohei Asano, and
Ryusuke Nakai, and Mses. Aiko Murai, Maki Terao, and Saeko
Iwata for their technical assistance in the MRI scanning and data
analysis. This work was supported by JSPS KAKENHI grant
numbers JP18H04193 (T. T.) and JP20H05802 (T. T.). The
research experiments were conducted using an MRI scanner
and related facilities at Kokoro Research Center, Kyoto Univer-
sity. The authors declare no competing financial interests.
Reprint requests should be sent to Takashi Tsukiura, Depart-
ment of Cognitive and Behavioral Sciences, Graduate School
of Human and Environmental Studies, Kyoto University,
Yoshida-Nihonmatsu-Cho, Sakyo-ku, Kyoto 606-8501, Japan,
or via e-mail: tsukiura.takashi.6c@kyoto-u.ac.jp.
Author Contributions
Reina Izumika: Conceptualization; Data curation; Formal
analysis; Investigation; Methodology; Software; Validation;
Visualization; Writing–Original draft. Roberto Cabeza:
Supervision; Visualization; Writing–Review & editing.
Takashi Tsukiura: Conceptualization; Formal analysis;
Funding acquisition; Investigation; Methodology; Project
administration; Software; Supervision; Validation; Visuali-
zation; Writing–Review & editing.
Funding Information
Takashi Tsukiura, Japan Society for the Promotion of Sci-
ence (https://dx.doi.org/10.13039/501100001691), grant
number: JP18H04193. Takashi Tsukiura, Japan Society
for the Promotion of Science (https://dx.doi.org/10
.13039/501100001691), grant number: JP20H05802.
Diversity in Citation Practices
Retrospective analysis of the citations in every article pub-
lished in this journal from 2010 to 2021 reveals a persistent
pattern of gender imbalance: Although the proportions of
authorship teams (categorized by estimated gender iden-
tification of first author/last author) publishing in the Jour-
nal of Cognitive Neuroscience ( JoCN) during this period
were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and
Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encour-
ages all authors to consider gender balance explicitly when
selecting which articles to cite and gives them the oppor-
tunity to report their article’s gender citation balance.
REFERENCES
Addis, D. R., Leclerc, C. M., Muscatell, K. A., & Kensinger, E. A.
(2010). There are age-related changes in neural connectivity
during the encoding of positive, but not negative, information.
Cortex, 46, 425–433. https://doi.org/10.1016/j.cortex.2009.04
.011, PubMed: 19555933
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994).
Impaired recognition of emotion in facial expressions
following bilateral damage to the human amygdala. Nature,
372, 669–672. https://doi.org/10.1038/372669a0, PubMed:
7990957
Amunts, K., Kedo, O., Kindler, M., Pieperhoff, P., Mohlberg, H.,
Shah, N. J., et al. (2005). Cytoarchitectonic mapping of the
human amygdala, hippocampal region and entorhinal cortex:
Intersubject variability and probability maps. Anatomy and
Embryology, 210, 343–352. https://doi.org/10.1007/s00429
-005-0025-5, PubMed: 16208455
Baltes, P. B., & Lindenberger, U. (1997). Emergence of a
powerful connection between sensory and cognitive
functions across the adult life span: A new window to the
study of cognitive aging? Psychology and Aging, 12, 12–21.
https://doi.org/10.1037//0882-7974.12.1.12, PubMed: 9100264
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false
discovery rate: A practical and powerful approach to multiple
testing. Journal of the Royal Statistical Society. Series B
(Methodological), 57, 289–300. https://doi.org/10.1111/j.2517
-6161.1995.tb02031.x
Bentin, S., Degutis, J. M., D’Esposito, M., & Robertson, L. C.
(2007). Too many trees to see the forest: Performance,
event-related potential, and functional magnetic resonance
imaging manifestations of integrative congenital prosopagnosia.
Journal of Cognitive Neuroscience, 19, 132–146. https://doi
.org/10.1162/jocn.2007.19.1.132, PubMed: 17214570
Bernstein, M., & Yovel, G. (2015). Two neural pathways of
face processing: A critical evaluation of current models.
Neuroscience and Biobehavioral Reviews, 55, 536–546. https://
doi.org/10.1016/j.neubiorev.2015.06.010, PubMed: 26067903
Binney, R. J., Embleton, K. V., Jefferies, E., Parker, G. J., & Ralph,
M. A. (2010). The ventral and inferolateral aspects of the
anterior temporal lobe are crucial in semantic memory:
Evidence from a novel direct comparison of distortion-
corrected fMRI, rTMS, and semantic dementia. Cerebral
Cortex, 20, 2728–2738. https://doi.org/10.1093/cercor
/bhq019, PubMed: 20190005
Bolla, K. I., Lindgren, K. N., Bonaccorsy, C., & Bleecker, M. L.
(1991). Memory complaints in older adults. Fact or fiction?
Archives of Neurology, 48, 61–64. https://doi.org/10.1001
/archneur.1991.00530130069022, PubMed: 1986728
Boutet, I., & Faubert, J. (2006). Recognition of faces and
complex objects in younger and older adults. Memory and
Cognition, 34, 854–864. https://doi.org/10.3758/bf03193432,
PubMed: 17063916
Boutet, I., Taler, V., & Collin, C. A. (2015). On the particular
vulnerability of face recognition to aging: A review of three
hypotheses. Frontiers in Psychology, 6, 1139. https://doi.org
/10.3389/fpsyg.2015.01139, PubMed: 26347670
Bowen, H. J., & Kensinger, E. A. (2017). Recapitulation of
emotional source context during memory retrieval. Cortex,
91, 142–156. https://doi.org/10.1016/j.cortex.2016.11.004,
PubMed: 27923474
Breiter, H. C., Etcoff, N. L., Whalen, P. J., Kennedy, W. A., Rauch,
S. L., Buckner, R. L., et al. (1996). Response and habituation
of the human amygdala during visual processing of facial
expression. Neuron, 17, 875–887. https://doi.org/10.1016
/s0896-6273(00)80219-6, PubMed: 8938120
Cabeza, R. (2002). Hemispheric asymmetry reduction in older
adults: The HAROLD model. Psychology and Aging, 17,
85–100. https://doi.org/10.1037/0882-7974.17.1.85, PubMed:
11931290
Cabeza, R., Albert, M., Belleville, S., Craik, F. I. M., Duarte, A.,
Grady, C. L., et al. (2018). Maintenance, reserve and
compensation: The cognitive neuroscience of healthy ageing.
Nature Reviews Neuroscience, 19, 701–710. https://doi.org
/10.1038/s41583-018-0068-2, PubMed: 30305711
Chaby, L., Narme, P., & George, N. (2011). Older adults’
configural processing of faces: Role of second-order
information. Psychology and Aging, 26, 71–79. https://doi.org
/10.1037/a0020873, PubMed: 20973603
Cohen, G., & Faulkner, D. (1986). Memory for proper names:
Age differences in retrieval. British Journal of
Developmental Psychology, 4, 187–197. https://doi.org/10
.1111/j.2044-835X.1986.tb01010.x
Comblain, C., D’Argembeau, A., & van der Linden, M. (2005).
Phenomenal characteristics of autobiographical memories for
emotional and neutral events in older and younger adults.
Experimental Aging Research, 31, 173–189. https://doi.org
/10.1080/03610730590915010, PubMed: 15981795
Crook, T. H., & West, R. L. (1990). Name recall performance
across the adult life-span. British Journal of Psychology, 81,
335–349. https://doi.org/10.1111/j.2044-8295.1990.tb02365.x,
PubMed: 2224395
D’Argembeau, A., & van der Linden, M. (2004). Identity but
not expression memory for unfamiliar faces is affected by
ageing. Memory, 12, 644–654. https://doi.org/10.1080
/09658210344000198, PubMed: 15615321
D’Argembeau, A., & van der Linden, M. (2007). Facial
expressions of emotion influence memory for facial identity
in an automatic way. Emotion, 7, 507–515. https://doi.org/10
.1037/1528-3542.7.3.507, PubMed: 17683207
Daselaar, S. M., Fleck, M. S., Dobbins, I. G., Madden, D. J., &
Cabeza, R. (2006). Effects of healthy aging on hippocampal
and rhinal memory functions: An event-related fMRI study.
Cerebral Cortex, 16, 1771–1782. https://doi.org/10.1093
/cercor/bhj112, PubMed: 16421332
Davachi, L. (2006). Item, context and relational episodic encoding
in humans. Current Opinion in Neurobiology, 16, 693–700.
https://doi.org/10.1016/j.conb.2006.10.012, PubMed: 17097284
Deng, L., Davis, S. W., Monge, Z. A., Wing, E. A., Geib, B. R.,
Raghunandan, A., et al. (2021). Age-related dedifferentiation
and hyperdifferentiation of perceptual and mnemonic
representations. Neurobiology of Aging, 106, 55–67. https://
doi.org/10.1016/j.neurobiolaging.2021.05.021, PubMed:
34246857
Dennis, N. A., & Cabeza, R. (2011). Age-related dedifferentiation
of learning systems: An fMRI study of implicit and explicit
learning. Neurobiology of Aging, 32, 2318.e17–2318.e30.
https://doi.org/10.1016/j.neurobiolaging.2010.04.004,
PubMed: 20471139
Dennis, N. A., Hayes, S. M., Prince, S. E., Madden, D. J., Huettel,
S. A., & Cabeza, R. (2008). Effects of aging on the neural
correlates of successful item and source memory encoding.
Journal of Experimental Psychology: Learning, Memory,
and Cognition, 34, 791–808. https://doi.org/10.1037/0278
-7393.34.4.791, PubMed: 18605869
Dennis, N. A., Overman, A. A., Gerver, C. R., McGraw, K. E.,
Rowley, M. A., & Salerno, J. M. (2019). Different types of
associative encoding evoke differential processing in both
younger and older adults: Evidence from univariate and
multivariate analyses. Neuropsychologia, 135, 107240.
https://doi.org/10.1016/j.neuropsychologia.2019.107240,
PubMed: 31682927
Diana, R. A., Yonelinas, A. P., & Ranganath, C. (2007). Imaging
recollection and familiarity in the medial temporal lobe: A
three-component model. Trends in Cognitive Sciences, 11,
379–386. https://doi.org/10.1016/j.tics.2007.08.001, PubMed:
17707683
Ebner, N. C., Johnson, M. K., & Fischer, H. (2012). Neural
mechanisms of reading facial emotions in young and older
adults. Frontiers in Psychology, 3, 223. https://doi.org/10
.3389/fpsyg.2012.00223, PubMed: 22798953
Eichenbaum, H., Yonelinas, A. P., & Ranganath, C. (2007). The
medial temporal lobe and recognition memory. Annual
Review of Neuroscience, 30, 123–152. https://doi.org/10.1146
/annurev.neuro.30.051606.094328, PubMed: 17417939
Eickhoff, S. B., Heim, S., Zilles, K., & Amunts, K. (2006). Testing
anatomically specified hypotheses in functional imaging
using cytoarchitectonic maps. Neuroimage, 32, 570–582.
https://doi.org/10.1016/j.neuroimage.2006.04.204, PubMed:
16781166
Eickhoff, S. B., Paus, T., Caspers, S., Grosbras, M. H., Evans, A. C.,
Zilles, K., et al. (2007). Assignment of functional activations to
probabilistic cytoarchitectonic areas revisited. Neuroimage,
36, 511–521. https://doi.org/10.1016/j.neuroimage.2007.03
.060, PubMed: 17499520
Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C., Fink,
G. R., Amunts, K., et al. (2005). A new SPM toolbox for
combining probabilistic cytoarchitectonic maps and
functional imaging data. Neuroimage, 25, 1325–1335. https://
doi.org/10.1016/j.neuroimage.2004.12.034, PubMed: 15850749
Etzel, J. A. (2017). MVPA significance testing when just above
chance, and related properties of permutation tests. In
2017 International Workshop on Pattern Recognition in
Neuroimaging, 1–4. https://doi.org/10.1109/PRNI.2017.7981498
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007).
G*Power 3: A flexible statistical power analysis program for
the social, behavioral, and biomedical sciences. Behavior
Research Methods, 39, 175–191. https://doi.org/10.3758
/bf03193146, PubMed: 17695343
Fenker, D. B., Schott, B. H., Richardson-Klavehn, A., Heinze, H. J.,
& Duzel, E. (2005). Recapitulating emotional context: Activity
of amygdala, hippocampus and fusiform cortex during
recollection and familiarity. European Journal of
Neuroscience, 21, 1993–1999. https://doi.org/10.1111/j.1460
-9568.2005.04033.x, PubMed: 15869492
Fjell, A. M., Walhovd, K. B., Fennema-Notestine, C., McEvoy, L. K.,
Hagler, D. J., Holland, D., et al. (2009). One-year brain atrophy
evident in healthy aging. Journal of Neuroscience, 29,
15223–15231. https://doi.org/10.1523/JNEUROSCI.3252-09
.2009, PubMed: 19955375
Foa, E. B., Gilboa-Schechtman, E., Amir, N., & Freshman, M. (2000).
Memory bias in generalized social phobia: Remembering
negative emotional expressions. Journal of Anxiety Disorders,
14, 501–519. https://doi.org/10.1016/s0887-6185(00)00036-0,
PubMed: 11095543
Foley, E., Rippon, G., Thai, N. J., Longe, O., & Senior, C. (2012).
Dynamic facial expressions evoke distinct activation in the
face perception network: A connectivity analysis study.
Journal of Cognitive Neuroscience, 24, 507–520. https://doi
.org/10.1162/jocn_a_00120, PubMed: 21861684
Fox, C. J., Iaria, G., & Barton, J. J. (2009). Defining the face
processing network: Optimization of the functional localizer
in fMRI. Human Brain Mapping, 30, 1637–1651. https://doi
.org/10.1002/hbm.20630, PubMed: 18661501
Fox, C. J., Moon, S. Y., Iaria, G., & Barton, J. J. (2009). The
correlates of subjective perception of identity and expression
in the face network: An fMRI adaptation study. Neuroimage,
44, 569–580. https://doi.org/10.1016/j.neuroimage.2008.09
.011, PubMed: 18852053
Frank, L. E., Preston, A. R., & Zeithamova, D. (2019). Functional
connectivity between memory and reward centers across task
and rest track memory sensitivity to reward. Cognitive,
Affective and Behavioral Neuroscience, 19, 503–522. https://
doi.org/10.3758/s13415-019-00700-8, PubMed: 30805850
Franklin, R. G., Jr., & Zebrowitz, L. A. (2017). Age differences in
emotion recognition: Task demands or perceptual
dedifferentiation? Experimental Aging Research, 43,
453–466. https://doi.org/10.1080/0361073X.2017.1369628,
PubMed: 29023209
Fujiwara, Y., Suzuki, H., Yasunaga, M., Sugiyama, M., Ijuin, M.,
Sakuma, N., et al. (2010). Brief screening tool for mild
cognitive impairment in older Japanese: Validation of the
Japanese version of the Montreal Cognitive Assessment.
Geriatrics and Gerontology International, 10, 225–232.
https://doi.org/10.1111/j.1447-0594.2010.00585.x, PubMed:
20141536
Gallo, D. A., Korthauer, L. E., McDonough, I. M., Teshale, S., &
Johnson, E. L. (2011). Age-related positivity effects and
autobiographical memory detail: Evidence from a past/future
source memory task. Memory, 19, 641–652. https://doi.org/10
.1080/09658211.2011.595723, PubMed: 21919591
Ghuman, A. S., Brunet, N. M., Li, Y., Konecky, R. O., Pyles, J. A.,
Walls, S. A., et al. (2014). Dynamic encoding of face information
in the human fusiform gyrus. Nature Communications, 5, 5672.
https://doi.org/10.1038/ncomms6672, PubMed: 25482825
Goh, J. O., Suzuki, A., & Park, D. C. (2010). Reduced neural
selectivity increases fMRI adaptation with age during face
discrimination. Neuroimage, 51, 336–344. https://doi.org/10
.1016/j.neuroimage.2010.01.107, PubMed: 20139012
Goncalves, A. R., Fernandes, C., Pasion, R., Ferreira-Santos, F.,
Barbosa, F., & Marques-Teixeira, J. (2018). Emotion
identification and aging: Behavioral and neural age-related
changes. Clinical Neurophysiology, 129, 1020–1029. https://
doi.org/10.1016/j.clinph.2018.02.128, PubMed: 29571120
Goodkind, M. S., Sollberger, M., Gyurak, A., Rosen, H. J.,
Rankin, K. P., Miller, B., et al. (2012). Tracking emotional
valence: The role of the orbitofrontal cortex. Human Brain
Mapping, 33, 753–762. https://doi.org/10.1002/hbm.21251,
PubMed: 21425397
Greene, N. R., & Naveh-Benjamin, M. (2020). A specificity
principle of memory: Evidence from aging and associative
memory. Psychological Science, 31, 316–331. https://doi.org
/10.1177/0956797620901760, PubMed: 32074021
Habak, C., Wilkinson, F., & Wilson, H. R. (2008). Aging disrupts
the neural transformations that link facial identity across
views. Vision Research, 48, 9–15. https://doi.org/10.1016/j
.visres.2007.10.007, PubMed: 18054981
Harry, B., Williams, M. A., Davis, C., & Kim, J. (2013). Emotional
expressions evoke a differential response in the fusiform face
area. Frontiers in Human Neuroscience, 7, 692. https://doi
.org/10.3389/fnhum.2013.00692, PubMed: 24194707
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., &
Pietrini, P. (2001). Distributed and overlapping representations
of faces and objects in ventral temporal cortex. Science, 293,
2425–2430. https://doi.org/10.1126/science.1063736, PubMed:
11577229
Haynes, J. D. (2015). A primer on pattern-based approaches
to fMRI: Principles, pitfalls, and perspectives. Neuron, 87,
257–270. https://doi.org/10.1016/j.neuron.2015.05.025,
PubMed: 26182413
Hayward, D. A., Pereira, E. J., Otto, A. R., & Ristic, J. (2018).
Smile! Social reward drives attention. Journal of Experimental
Psychology: Human Perception and Performance, 44,
206–214. https://doi.org/10.1037/xhp0000459, PubMed:
28795836
Heberlein, A. S., Padon, A. A., Gillihan, S. J., Farah, M. J., &
Fellows, L. K. (2008). Ventromedial frontal lobe plays a critical
role in facial emotion recognition. Journal of Cognitive
Neuroscience, 20, 721–733. https://doi.org/10.1162/jocn.2008
.20049, PubMed: 18052791
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The
fusiform face area: A module in human extrastriate cortex
specialized for face perception. Journal of Neuroscience, 17,
4302–4311. https://doi.org/10.1523/JNEUROSCI.17-11-04302
.1997, PubMed: 9151747
Katsumi, Y., Andreano, J. M., Barrett, L. F., Dickerson, B. C., &
Touroutoglou, A. (2021). Greater neural differentiation in the
ventral visual cortex is associated with youthful memory in
superaging. Cerebral Cortex, 31, 5275–5287. https://doi.org
/10.1093/cercor/bhab157, PubMed: 34190976
Keightley, M. L., Chiew, K. S., Anderson, J. A., & Grady, C. L.
(2011). Neural correlates of recognition memory for
emotional faces and scenes. Social Cognitive and Affective
Neuroscience, 6, 24–37. https://doi.org/10.1093/scan/nsq003,
PubMed: 20194514
Koen, J. D., & Rugg, M. D. (2019). Neural dedifferentiation in
the aging brain. Trends in Cognitive Sciences, 23, 547–559.
https://doi.org/10.1016/j.tics.2019.04.012, PubMed: 31174975
LaBar, K. S., Crupain, M. J., Voyvodic, J. T., & McCarthy, G.
(2003). Dynamic perception of facial affect and identity in the
human brain. Cerebral Cortex, 13, 1023–1033. https://doi.org
/10.1093/cercor/13.10.1023, PubMed: 12967919
Lamar, M., & Resnick, S. M. (2004). Aging and prefrontal
functions: Dissociating orbitofrontal and dorsolateral abilities.
Neurobiology of Aging, 25, 553–558. https://doi.org/10.1016/j
.neurobiolaging.2003.06.005, PubMed: 15013577
Lee, Y., Grady, C. L., Habak, C., Wilson, H. R., & Moscovitch, M.
(2011). Face processing changes in normal aging revealed by
fMRI adaptation. Journal of Cognitive Neuroscience, 23,
3433–3447. https://doi.org/10.1162/jocn_a_00026, PubMed:
21452937
Leigland, L. A., Schulz, L. E., & Janowsky, J. S. (2004). Age
related changes in emotional memory. Neurobiology of
Aging, 25, 1117–1124. https://doi.org/10.1016/j
.neurobiolaging.2003.10.015, PubMed: 15212836
Hill, P. F., King, D. R., & Rugg, M. D. (2021). Age differences in retrieval-related reinstatement reflect age-related dedifferentiation at encoding. Cerebral Cortex, 31, 106–122. https://doi.org/10.1093/cercor/bhaa210, PubMed: 32829396
Hornak, J., Bramham, J., Rolls, E. T., Morris, R. G., O’Doherty, J.,
Bullock, P. R., et al. (2003). Changes in emotion after
circumscribed surgical lesions of the orbitofrontal and
cingulate cortices. Brain, 126, 1691–1712. https://doi.org/10
.1093/brain/awg168, PubMed: 12805109
Hornak, J., Rolls, E. T., & Wade, D. (1996). Face and voice
expression identification in patients with emotional and
behavioural changes following ventral frontal lobe damage.
Neuropsychologia, 34, 247–261. https://doi.org/10.1016/0028
-3932(95)00106-9, PubMed: 8657356
Huan, S. Y., Liu, K. P., Lei, X., & Yu, J. (2020). Age-related
emotional bias in associative memory consolidation: The role
of sleep. Neurobiology of Learning and Memory, 171,
107204. https://doi.org/10.1016/j.nlm.2020.107204, PubMed:
32145405
Ishai, A., Schmidt, C. F., & Boesiger, P. (2005). Face perception
is mediated by a distributed cortical network. Brain Research
Bulletin, 67, 87–93. https://doi.org/10.1016/j.brainresbull
.2005.05.027, PubMed: 16140166
James, L. E., Fogler, K. A., & Tauber, S. K. (2008). Recognition
memory measures yield disproportionate effects of aging
on learning face–name associations. Psychology and Aging,
23, 657–664. https://doi.org/10.1037/a0013008, PubMed:
18808254
Kalkstein, J., Checksfield, K., Bollinger, J., & Gazzaley, A. (2011). Diminished top–down control underlies a visual imagery deficit in normal aging. Journal of Neuroscience, 31, 15768–15774. https://doi.org/10.1523/JNEUROSCI.3209-11.2011, PubMed: 22049420
Leshikar, E. D., Gutchess, A. H., Hebrank, A. C., Sutton, B. P., & Park, D. C. (2010). The impact of increased relational encoding demands on frontal and hippocampal function in older adults. Cortex, 46, 507–521. https://doi.org/10.1016/j.cortex.2009.07.011, PubMed: 19709652
Lindenberger, U., & Baltes, P. B. (1994). Sensory functioning
and intelligence in old age: A strong connection. Psychology
and Aging, 9, 339–355. https://doi.org/10.1037//0882-7974.9.3
.339, PubMed: 7999320
Liu, Z. X., Grady, C., & Moscovitch, M. (2018). The effect of
prior knowledge on post-encoding brain connectivity and its
relation to subsequent memory. Neuroimage, 167, 211–223.
https://doi.org/10.1016/j.neuroimage.2017.11.032, PubMed:
29158201
Mather, M., & Carstensen, L. L. (2005). Aging and motivated
cognition: The positivity effect in attention and memory.
Trends in Cognitive Sciences, 9, 496–502. https://doi.org/10
.1016/j.tics.2005.08.005, PubMed: 16154382
Matsuda, Y. T., Fujimura, T., Katahira, K., Okada, M., Ueno, K.,
Cheng, K., et al. (2013). The implicit processing of categorical
and dimensional strategies: An fMRI study of facial emotion
perception. Frontiers in Human Neuroscience, 7, 551. https://
doi.org/10.3389/fnhum.2013.00551, PubMed: 24133426
McLaren, D. G., Ries, M. L., Xu, G., & Johnson, S. C. (2012). A
generalized form of context-dependent psychophysiological
interactions (gPPI): A comparison to standard approaches.
Neuroimage, 61, 1277–1286. https://doi.org/10.1016/j
.neuroimage.2012.03.068, PubMed: 22484411
Murphy, J., Millgate, E., Geary, H., Catmur, C., & Bird, G. (2019).
No effect of age on emotion recognition after accounting for
cognitive factors and depression. Quarterly Journal of
Experimental Psychology, 72, 2690–2704. https://doi.org/10
.1177/1747021819859514, PubMed: 31184269
Nasreddine, Z. S., Phillips, N. A., Bedirian, V., Charbonneau, S.,
Whitehead, V., Collin, I., et al. (2005). The Montreal Cognitive
Assessment, MoCA: A brief screening tool for mild cognitive
impairment. Journal of the American Geriatrics Society, 53,
695–699. https://doi.org/10.1111/j.1532-5415.2005.53221.x,
PubMed: 15817019
Naveh-Benjamin, M. (2000). Adult age differences in memory
performance: Tests of an associative deficit hypothesis.
Journal of Experimental Psychology: Learning, Memory,
and Cognition, 26, 1170–1187. https://doi.org/10.1037//0278
-7393.26.5.1170, PubMed: 11009251
Naveh-Benjamin, M., Guez, J., Kilb, A., & Reedy, S. (2004). The
associative memory deficit of older adults: Further support
using face–name associations. Psychology and Aging,
19, 541–546. https://doi.org/10.1037/0882-7974.19.3.541,
PubMed: 15383004
Ness, H. T., Folvik, L., Sneve, M. H., Vidal-Pineiro, D., Raud, L.,
Geier, O. M., et al. (2022). Reduced hippocampal–striatal
interactions during formation of durable episodic memories
in aging. Cerebral Cortex, 32, 2358–2372. https://doi.org/10
.1093/cercor/bhab331, PubMed: 34581398
Nicholls, M. E., Thomas, N. A., Loetscher, T., & Grimshaw, G. M.
(2013). The Flinders Handedness survey (FLANDERS): A brief
measure of skilled hand preference. Cortex, 49, 2914–2926.
https://doi.org/10.1016/j.cortex.2013.02.002, PubMed:
23498655
O’Doherty, J. P. (2004). Reward representations and reward-
related learning in the human brain: Insights from
neuroimaging. Current Opinion in Neurobiology, 14,
769–776. https://doi.org/10.1016/j.conb.2004.10.016,
PubMed: 15582382
Okubo, M., Suzuki, H., & Nicholls, M. E. (2014). A Japanese
version of the FLANDERS handedness questionnaire.
Japanese Journal of Psychology, 85, 474–481. https://doi.org
/10.4992/jjpsy.85.13235, PubMed: 25639030
Paller, K. A., & Wagner, A. D. (2002). Observing the
transformation of experience into memory. Trends in
Cognitive Sciences, 6, 93–102. https://doi.org/10.1016/s1364
-6613(00)01845-3, PubMed: 15866193
Park, J., Carp, J., Hebrank, A., Park, D. C., & Polk, T. A. (2010).
Neural specificity predicts fluid processing ability in older
adults. Journal of Neuroscience, 30, 9253–9259. https://doi
.org/10.1523/JNEUROSCI.0853-10.2010, PubMed: 20610760
Park, D. C., Polk, T. A., Park, R., Minear, M., Savage, A., & Smith,
M. R. (2004). Aging reduces neural specialization in ventral
visual cortex. Proceedings of the National Academy of
Sciences, U.S.A., 101, 13091–13095. https://doi.org/10.1073
/pnas.0405148101, PubMed: 15322270
Payer, D., Marshuetz, C., Sutton, B., Hebrank, A., Welsh, R. C., &
Park, D. C. (2006). Decreased neural specialization in old
adults on a working memory task. NeuroReport, 17, 487–491.
https://doi.org/10.1097/01.wnr.0000209005.40481.31,
PubMed: 16543812
Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G.
(1998). Temporal cortex activation in humans viewing eye
and mouth movements. Journal of Neuroscience, 18,
2188–2199. https://doi.org/10.1523/JNEUROSCI.18-06-02188
.1998, PubMed: 9482803
Radloff, L. S. (1977). The CES-D scale: A self-report depression
scale for research in the general population. Applied
Psychological Measurement, 1, 385–401. https://doi.org/10
.1177/014662167700100306
Riediger, M., Voelkle, M. C., Ebner, N. C., & Lindenberger, U.
(2011). Beyond “happy, angry, or sad?”: Age-of-poser and
age-of-rater effects on multi-dimensional emotion
perception. Cognition and Emotion, 25, 968–982. https://doi
.org/10.1080/02699931.2010.540812, PubMed: 21432636
Rissman, J., Gazzaley, A., & D’Esposito, M. (2004). Measuring
functional connectivity during distinct stages of a cognitive
task. Neuroimage, 23, 752–763. https://doi.org/10.1016/j
.neuroimage.2004.06.035, PubMed: 15488425
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H.
(2008). A meta-analytic review of emotion recognition and
aging: Implications for neuropsychological models of aging.
Neuroscience and Biobehavioral Reviews, 32, 863–881. https://
doi.org/10.1016/j.neubiorev.2008.01.001, PubMed: 18276008
Said, C. P., Moore, C. D., Engell, A. D., Todorov, A., & Haxby, J. V.
(2010). Distributed representations of dynamic facial
expressions in the superior temporal sulcus. Journal of Vision,
10, 11. https://doi.org/10.1167/10.5.11, PubMed: 20616141
Sala-Llonch, R., Bartrés-Faz, D., & Junqué, C. (2015).
Reorganization of brain networks in aging: A review of
functional connectivity studies. Frontiers in Psychology, 6,
663. https://doi.org/10.3389/fpsyg.2015.00663, PubMed:
26052298
Salat, D. H., Greve, D. N., Pacheco, J. L., Quinn, B. T., Helmer,
K. G., Buckner, R. L., et al. (2009). Regional white matter
volume differences in nondemented aging and Alzheimer’s
disease. Neuroimage, 44, 1247–1258. https://doi.org/10.1016
/j.neuroimage.2008.10.030, PubMed: 19027860
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura,
M. (2004). Enhanced neural activity in response to dynamic
facial expressions of emotion: An fMRI study. Cognitive Brain
Research, 20, 81–91. https://doi.org/10.1016/j.cogbrainres
.2004.01.008, PubMed: 15130592
Sato, W., Kubota, Y., Okada, T., Murai, T., Yoshikawa, S., &
Sengoku, A. (2002). Seeing happy emotion in fearful and
angry faces: Qualitative analysis of facial expression
recognition in a bilateral amygdala-damaged patient. Cortex,
38, 727–742. https://doi.org/10.1016/s0010-9452(08)70040-6,
PubMed: 12507042
Saverino, C., Fatima, Z., Sarraf, S., Oder, A., Strother, S. C., &
Grady, C. L. (2016). The associative memory deficit in aging is
related to reduced selectivity of brain activity during encoding.
Journal of Cognitive Neuroscience, 28, 1331–1344. https://doi
.org/10.1162/jocn_a_00970, PubMed: 27082043
Schrouff, J., Rosa, M. J., Rondina, J. M., Marquand, A. F., Chu, C.,
Ashburner, J., et al. (2013). PRoNTo: Pattern recognition for
neuroimaging toolbox. Neuroinformatics, 11, 319–337. https://
doi.org/10.1007/s12021-013-9178-1, PubMed: 23417655
Sergerie, K., Lepage, M., & Armony, J. L. (2005). A face to
remember: Emotional expression modulates prefrontal
activity during memory formation. Neuroimage, 24, 580–585.
https://doi.org/10.1016/j.neuroimage.2004.08.051, PubMed:
15627601
Shen, J., Kassir, M. A., Wu, J., Zhang, Q., Zhou, S., Xuan, S. Y., et al.
(2013). MR volumetric study of piriform-cortical amygdala and
orbitofrontal cortices: The aging effect. PLoS One, 8, e74526.
https://doi.org/10.1371/journal.pone.0074526, PubMed:
24069317
Shigemune, Y., Abe, N., Suzuki, M., Ueno, A., Mori, E., Tashiro,
M., et al. (2010). Effects of emotion and reward motivation on
neural correlates of episodic memory encoding: A PET study.
Neuroscience Research, 67, 72–79. https://doi.org/10.1016/j
.neures.2010.01.003, PubMed: 20079775
Shima, S. (1985). New self-rating scale for depression. Seisin-
Igaku, 27, 717–723.
Shimamura, A. P., Ross, J. G., & Bennett, H. D. (2006). Memory
for facial expressions: The power of a smile. Psychonomic
Bulletin and Review, 13, 217–222. https://doi.org/10.3758
/bf03193833, PubMed: 16892984
Skerry, A. E., & Saxe, R. (2014). A common neural code for
perceived and inferred emotion. Journal of Neuroscience,
34, 15997–16008. https://doi.org/10.1523/JNEUROSCI.1676
-14.2014, PubMed: 25429141
Smith, M. L., Gruhn, D., Bevitt, A., Ellis, M., Ciripan, O.,
Scrimgeour, S., et al. (2018). Transmitting and decoding
facial expressions of emotion during healthy aging: More
similarities than differences. Journal of Vision, 18, 10. https://
doi.org/10.1167/18.9.10, PubMed: 30208429
Sormaz, M., Watson, D. M., Smith, W. A. P., Young, A. W., &
Andrews, T. J. (2016). Modelling the perceptual similarity of
facial expressions from image statistics and neural responses.
Neuroimage, 129, 64–71. https://doi.org/10.1016/j
.neuroimage.2016.01.041, PubMed: 26825440
Sperling, R., Chua, E., Cocchiarella, A., Rand-Giovannetti, E.,
Poldrack, R., Schacter, D. L., et al. (2003). Putting names to
faces: Successful encoding of associative memories activates
the anterior hippocampal formation. Neuroimage, 20,
1400–1410. https://doi.org/10.1016/S1053-8119(03)00391-4,
PubMed: 14568509
St Jacques, P. L., Dolcos, F., & Cabeza, R. (2009). Effects of aging
on functional connectivity of the amygdala for subsequent
memory of negative pictures: A network analysis of functional
magnetic resonance imaging data. Psychological Science,
20, 74–84. https://doi.org/10.1111/j.1467-9280.2008.02258.x,
PubMed: 19152542
St-Laurent, M., Abdi, H., Burianova, H., & Grady, C. L. (2011).
Influence of aging on the neural correlates of
autobiographical, episodic, and semantic memory retrieval.
Journal of Cognitive Neuroscience, 23, 4150–4163. https://
doi.org/10.1162/jocn_a_00079, PubMed: 21671743
Sugimoto, H., Dolcos, F., & Tsukiura, T. (2021). Memory of
my victory and your defeat: Contributions of reward- and
memory-related regions to the encoding of winning events in
competitions with others. Neuropsychologia, 152, 107733.
https://doi.org/10.1016/j.neuropsychologia.2020.107733,
PubMed: 33347912
Summerfield, C., Greene, M., Wager, T., Egner, T., Hirsch, J., &
Mangels, J. (2006). Neocortical connectivity during episodic
memory formation. PLoS Biology, 4, e128. https://doi.org/10
.1371/journal.pbio.0040128, PubMed: 16605307
Tisserand, D. J., Pruessner, J. C., Sanz Arigita, E. J., van Boxtel,
M. P., Evans, A. C., Jolles, J., et al. (2002). Regional frontal
cortical volumes decrease differentially in aging: An MRI
study to compare volumetric approaches and voxel-based
morphometry. Neuroimage, 17, 657–669. https://doi.org/10
.1006/nimg.2002.1173, PubMed: 12377141
Tsukiura, T., & Cabeza, R. (2008). Orbitofrontal and
hippocampal contributions to memory for face–name
associations: The rewarding power of a smile.
Neuropsychologia, 46, 2310–2319. https://doi.org/10.1016/j
.neuropsychologia.2008.03.013, PubMed: 18455740
Tsukiura, T., & Cabeza, R. (2011). Remembering beauty: Roles
of orbitofrontal and hippocampal regions in successful
memory encoding of attractive faces. Neuroimage, 54,
653–660. https://doi.org/10.1016/j.neuroimage.2010.07.046,
PubMed: 20659568
Tsukiura, T., Sekiguchi, A., Yomogida, Y., Nakagawa, S.,
Shigemune, Y., Kambara, T., et al. (2011). Effects of aging on
hippocampal and anterior temporal activations during
successful retrieval of memory for face–name associations.
Journal of Cognitive Neuroscience, 23, 200–213. https://doi
.org/10.1162/jocn.2010.21476, PubMed: 20350057
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello,
F., Etard, O., Delcroix, N., et al. (2002). Automated
anatomical labeling of activations in SPM using a macroscopic
anatomical parcellation of the MNI MRI single-subject brain.
Neuroimage, 15, 273–289. https://doi.org/10.1006/nimg.2001
.0978, PubMed: 11771995
van Reekum, C. M., Schaefer, S. M., Lapate, R. C., Norris, C. J.,
Greischar, L. L., & Davidson, R. J. (2011). Aging is associated
with positive responding to neutral information but reduced
recovery from negative information. Social Cognitive and
Affective Neuroscience, 6, 177–185. https://doi.org/10.1093
/scan/nsq031, PubMed: 20385664
Watson, K. K., & Platt, M. L. (2012). Social signals in primate
orbitofrontal cortex. Current Biology, 22, 2268–2273. https://
doi.org/10.1016/j.cub.2012.10.016, PubMed: 23122847
Wegrzyn, M., Riehle, M., Labudda, K., Woermann, F.,
Baumgartner, F., Pollmann, S., et al. (2015). Investigating the
brain basis of facial expression perception using multi-voxel
pattern analysis. Cortex, 69, 131–140. https://doi.org/10.1016
/j.cortex.2015.05.003, PubMed: 26046623
Willis, M. L., Palermo, R., Burke, D., McGrillen, K., & Miller, L.
(2010). Orbitofrontal cortex lesions result in abnormal social
judgements to emotional faces. Neuropsychologia, 48,
2182–2187. https://doi.org/10.1016/j.neuropsychologia.2010
.04.010, PubMed: 20399220
Winston, J. S., O’Doherty, J., & Dolan, R. J. (2003). Common
and distinct neural responses during direct and incidental
processing of multiple facial emotions. Neuroimage, 20,
84–97. https://doi.org/10.1016/s1053-8119(03)00303-3,
PubMed: 14527572
Xie, Y., Ksander, J., Gutchess, A., Hadjikhani, N., Ward, N.,
Boshyan, J., et al. (2021). Age differences in neural activation
to face trustworthiness: Voxel pattern and activation level
assessments. Cognitive, Affective and Behavioral
Neuroscience, 21, 278–291. https://doi.org/10.3758/s13415
-021-00868-y, PubMed: 33751423
Yang, T. T., Menon, V., Eliez, S., Blasey, C., White, C. D., Reid, A. J.,
et al. (2002). Amygdalar activation associated with positive
and negative facial expressions. NeuroReport, 13, 1737–1741.
https://doi.org/10.1097/00001756-200210070-00009, PubMed:
12395114
Yang, A. X., & Urminsky, O. (2018). The smile-seeking
hypothesis: How immediate affective reactions motivate and
reward gift giving. Psychological Science, 29, 1221–1233.
https://doi.org/10.1177/0956797618761373, PubMed:
29920154
Zebrowitz, L. A., Boshyan, J., Ward, N., Gutchess, A., &
Hadjikhani, N. (2017). The older adult positivity effect in
evaluations of trustworthiness: Emotion regulation or
cognitive capacity? PLoS One, 12, e0169823. https://doi.org/10
.1371/journal.pone.0169823, PubMed: 28060919
Zelinski, E. M., Gilewski, M. J., & Thompson, L. W. (1980). Do laboratory tests relate to self-assessment of memory ability in the young and old? New Directions in Memory and Aging, 519–544.
Zhang, H., Japee, S., Nolan, R., Chu, C., Liu, N., & Ungerleider,
L. G. (2016). Face-selective regions differ in their ability to
classify facial expressions. Neuroimage, 130, 77–90. https://
doi.org/10.1016/j.neuroimage.2016.01.045, PubMed:
26826513
Zhao, K., Liu, M., Gu, J., Mo, F., Fu, X., & Hong Liu, C. (2020).
The preponderant role of fusiform face area for the facial
expression confusion effect: An MEG study. Neuroscience,
433, 42–52. https://doi.org/10.1016/j.neuroscience.2020.03
.001, PubMed: 32169552