Modulation of Neural Oscillatory Activity
during Dynamic Face Processing

Elaine Foley, Gina Rippon, and Carl Senior

Abstract

■ Various neuroimaging and neurophysiological methods have been used to examine neural activation patterns in response to faces. However, much of the previous research has relied on static images of faces, which do not allow a complete description of the temporal structure of face-specific neural activities to be made. More recently, insights are emerging from fMRI studies about the neural substrates that underpin our perception of naturalistic dynamic face stimuli, but the temporal and spectral oscillatory activity associated with processing dynamic faces has yet to be fully characterized. Here, we used MEG and beamformer source localization to examine the spatiotemporal profile of neurophysiological oscillatory activity in response to dynamic faces. Source analysis revealed a number of regions showing enhanced activation in response to dynamic relative to static faces in the distributed face network, which were spatially coincident with regions that were previously identified with fMRI. Furthermore, our results demonstrate that perception of realistic dynamic facial stimuli activates a distributed neural network at varying time points, facilitated by modulations in low-frequency power within the alpha and beta frequency ranges (8–30 Hz). Naturalistic dynamic face stimuli may provide a better means of representing the complex nature of perceiving facial expressions in the real world, and neural oscillatory activity can provide additional insights into the associated neural processes. ■

INTRODUCTION

The ability to recognize faces and process expressed intentions and emotions is at the center of human social perception skills. Various neuroimaging and neurophysiological methods have been used to examine neural activation patterns in response to a diverse range of face processing tasks, commonly using static images of faces. These methods have informed the development of valuable, detailed cortical network models of face perception (Ishai, 2008; Haxby, Hoffman, & Gobbini, 2002), and the significant contribution of these techniques to our understanding of face perception has been widely documented elsewhere (Vuilleumier & Pourtois, 2007; Calder & Young, 2005; Haxby, Hoffman, & Gobbini, 2000). However, it is important to consider that faces are inherently dynamic rather than static and are most often seen moving in the real world. Yet previous studies investigating face processing have overwhelmingly relied on static images of faces. These posed static stimuli do not allow a complete description of the temporal structure of face-specific neural activities to be made (Adolphs, 2002b), as they represent impoverished displays lacking natural facial motion (Ambadar, Schooler, & Cohn, 2005). Dynamic face stimuli therefore offer a more suitable means of examining the neural basis of realistic natural face perception.

Aston University, Birmingham, UK

© 2017 Massachusetts Institute of Technology

In recent years, insights have emerged from neuroimaging studies that have used fMRI to explore the neural substrates involved in processing naturalistic dynamic face stimuli. These studies have revealed differential patterns of activation for dynamic and static faces (Foley, Rippon, Thai, Longe, & Senior, 2012; Pitcher, Dilks, Saxe, Triantafyllou, & Kanwisher, 2011; Fox, Iaria, & Barton, 2009; LaBar, Crupain, Voyvodic, & McCarthy, 2003). In general, explicit movement information in dynamic face stimuli has been shown to activate a richer, broader, and partly different network of regions compared with static stimuli (Foley et al., 2012; Fox et al., 2009; Schultz & Pilz, 2009; Trautmann, Fehr, & Herrmann, 2009). Dynamic face stimuli have consistently been shown to elicit larger responses relative to static faces in the STS (Foley et al., 2012; Pitcher et al., 2011; Fox et al., 2009; Schultz & Pilz, 2009; Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004; Kilts, Egan, Gideon, Ely, & Hoffman, 2003; LaBar et al., 2003), the middle temporal gyri (MTG), and the inferior frontal gyri, among others (Foley et al., 2012; Sato et al., 2004). These studies have harnessed the powerful spatial resolution of fMRI to localize the neural networks involved in processing dynamic faces but have yet to fully characterize the temporal and spectral information within these networks.

A handful of neurophysiological studies using EEG
and MEG have explored spatiotemporal processing of
dynamic face stimuli. Motion-sensitive ERPs have been
identified with EEG over bilateral occipito-temporal

Journal of Cognitive Neuroscience 30:3, pp. 338–352
doi:10.1162/jocn_a_01209

electrodes when participants viewed dynamic images of the face, hand, and body (Wheaton, Pipingas, Silberstein, & Puce, 2001). In a series of EEG studies, Puce, Smith, and Allison (2000) found a prominent negativity between 170 and 220 msec (N170) in response to the apparent motion of a natural face over posterior scalp electrodes and to the movement of a line-drawn face (Puce & Perrett, 2003). More recently, Johnston, Molyneux, and Young (2015) reported an increased N170 response to free viewing of ambient video stimuli of faces, providing support for the N170 response in a more ecologically valid setting. A corresponding sensor-level MEG response, the M170, has also been described to the apparent motion of natural faces (Watanabe, Kakigi, & Puce, 2001) and more recently to facial avatars (Ulloa, Puce, Hugueville, & George, 2014). Similarly, Watanabe, Miki, and Kakigi (2005) and Miki et al. (2004, 2007, 2011) found that the observation of different types of facial motion, including mouth and eye movements, elicited evoked responses with a peak latency of 160 msec in lateral occipito-temporal cortex, corresponding to area MT (for a review, see Miki & Kakigi, 2014).

At the source level, Sato, Kochiyama, Uono, and Yoshikawa (2008) described an increased evoked response to averted versus straight gazes, between 150 and 200 msec, which peaked at 170 msec, in the STS. More recently, Sato, Kochiyama, and Uono (2015) found that posterior regions including visual area V5, the fusiform gyrus, and the STS showed increased evoked activity in response to dynamic facial expressions relative to dynamic mosaic control stimuli between 150 and 200 msec after stimulus onset. Taken together, these findings present a relatively consistent picture, whereby facial motion elicits increased evoked activity around 200 msec in occipito-temporal regions. These regions form part of the so-called “core system” in Haxby et al.’s (2000) model of face perception. According to this model, the perception of identity, the invariant aspect of a face, is believed to occur in the lateral fusiform gyrus at approximately 170 msec, whereas the STS is more involved in processing the changeable facial features, such as eye and mouth movements, around the same time (Pitcher et al., 2011; Haxby et al., 2000). Regions of the so-called “extended system,” such as the amygdala, OFC, somatosensory cortex, and insular cortex (IC), may be recruited from around 300 msec after stimulus onset to link the perceptual representation to conceptual knowledge of the emotional and social meanings of perceived expressions (Adolphs, 2002a).

Remarkably, relative to the wealth of neurophysiological studies that have examined the time course of evoked responses to faces, very few have explored the ongoing neural oscillatory activity associated with face processing, particularly with respect to dynamic facial displays. Evoked responses focus specifically on phase-locked neural responses to stimuli; however, this neural activity constitutes only part of the total neural response to a delivered stimulus (Donner & Siegel, 2011; Hillebrand, Singh, Holliday, Furlong, & Barnes, 2005). It is increasingly clear that neural oscillations play an important role in brain function, and the synchronized activity of oscillating neural networks is now believed to be the critical link between single-neuron activity and behavior (Buzsáki & Draguhn, 2004; Engel, Fries, & Singer, 2001). There is also a growing body of literature suggesting that neural oscillations are involved in supporting information transfer and binding between brain regions (Donner & Siegel, 2011; Engel & Fries, 2010; Buzsáki & Draguhn, 2004). More recently, face processing has been linked to neural oscillatory activity in studies using various paradigms involving predominantly static face stimuli (Furl, Coppola, Averbeck, & Weinberger, 2014; Schyns, Thut, & Gross, 2011). We therefore examined ongoing neural oscillatory activity, or so-called induced oscillations, during dynamic face processing in this study to assess additional aspects of face perception that may not be evident in the modulation of the evoked response.

Unlike fMRI, the MEG signal provides a direct measure of electrical neural activity and is therefore well suited to the investigation of induced neural oscillatory responses, particularly when combined with beamforming source localization techniques such as synthetic aperture magnetometry (SAM). Beamformer methods can be used to examine temporal and spectral information at both short and longer latencies, which is particularly useful when exploring cognitive processes such as face perception (Hillebrand et al., 2005). Notably, the output of the beamformer can be described as a “virtual electrode,” which may be visualized as time–frequency plots of activity arising from specific voxels where spectral power changes are identified (Hillebrand et al., 2005; Singh, Barnes, Hillebrand, Forde, & Williams, 2002). This permits the detailed examination of temporal and spectral power changes within specific ROIs, in this case, within regions of the face perception network.

MEG and SAM source localization methods have been used to investigate changes in induced cortical oscillatory activity in response to viewing various dynamic biological motion stimuli (Muthukumaraswamy, Johnson, Gaetz, & Cheyne, 2006; Singh et al., 2002). Singh et al. (2002) reported decreases in induced oscillatory power in the 5- to 15- and 15- to 25-Hz frequency bands in response to viewing point-light displays of biological motion in area MT and the STS. Similarly, Muthukumaraswamy et al. (2006) found decreases in low-frequency power, in the alpha (8–15 Hz) and beta (15–35 Hz) bands, in sources within lateral sensorimotor areas and bilateral occipito-parietal regions in response to viewing dynamic biological motion stimuli of orofacial movements. On the other hand, Lee et al. (2010) found decreases in high-frequency power (30–80 Hz) in bilateral STS in response to viewing dynamic face stimuli portraying rigid motion, such as head turns that conveyed shifts in social attention. Interestingly, they did not find decreases in low-frequency


power (5–30 Hz) in the STS. They did, however, report low-frequency power (5–30 Hz) decreases in the fusiform gyrus in response to both dynamic and static face stimuli, which were greater in magnitude and spatial extent for the dynamic face stimuli. The authors interpreted these oscillatory power decreases in both low- and high-frequency bands as representing increases in cortical activation, in both ventral and dorsal streams, during passive viewing of face stimuli.

A more recent MEG study by Jabbi et al. (2015) reported changes in sustained MEG beta-band (14–30 Hz) oscillatory activity during observation of dynamic faces, specifically happy and fearful facial displays, relative to their static counterparts in the STS and frontolimbic cortices. However, they found that the modulation of oscillatory activity in response to facial dynamics was specific to the beta frequency band only, which they claim is consistent with the role of beta-band activity in visual and social information processing. They also report concordance with fMRI data, where they found convergence of sustained BOLD signal and beta-band oscillatory responses in the STS for dynamic face stimuli. Similarly, Singh et al. (2002) compared the results from their MEG and SAM analysis with fMRI data acquired using the same paradigm and found that the fMRI BOLD response was inversely related to cortical synchronization within these lower-frequency (5–15 and 15–25 Hz) bands.

By combining their MEG and fMRI data while using the same paradigms, these studies provide a means to better link neurophysiological oscillatory responses to anatomical networks (Hall, Robson, Morris, & Brookes, 2014). Moreover, there is a growing body of evidence suggesting that increased BOLD activation is accompanied by decreases in cortical oscillatory power in the low-frequency range, including the alpha and beta frequencies (Hall et al., 2014; Zumer, Brookes, Stevenson, Francis, & Morris, 2010). The current study therefore aims to explore neural responses to dynamic faces in the alpha and beta frequency ranges within the dynamic face perception network, using a paradigm similar to that employed in a previous fMRI study (Foley et al., 2012). It is still unclear how modulations in low-frequency (8–30 Hz) oscillatory brain activity within the distributed face perception network contribute to or reflect processing of dynamic facial displays and how these differ between dynamic and static displays. Thus, the overall aim of this study was to investigate the spatiotemporal and neural oscillatory responses to realistic dynamic face stimuli compared directly with their static counterparts. On the basis of the existing literature, our primary hypothesis is that dynamic face processing will be associated with modulations in alpha and beta power (8–30 Hz) within regions of the distributed face perception network, including occipital, temporal, and frontal regions.

Specifically, we predict that changes in oscillatory power within the alpha and beta bands will be larger for the dynamic face stimuli, which will exhibit greater decreases in power relative to their static counterparts. Furthermore, we expect overlap between the source locations of oscillatory power decreases identified in the SAM source analysis for the contrast of dynamic and static faces and the anatomical sources identified as showing increased BOLD responses for dynamic relative to static faces in our previous fMRI study (Foley et al., 2012). We further predict that responses in different regions of the distributed face perception network will display different temporal patterns of activation; specifically, sources within the occipital gyri will display earlier peak responses for early visual analysis, followed by later responses in temporal and frontal regions, which are recruited later for cognitive and social processing (Adolphs, 2002a).

METHODS

Participants

Fourteen healthy, self-reported right-handed volunteers (five men) with normal or corrected-to-normal vision (mean age = 29.2 years, SD = 2.45 years) gave full written informed consent to take part in the study, which was approved by the Aston University Human Science Ethics Committee.

Experimental Design and Imaging Paradigm

The paradigm was similar to that used in our previous study (see Foley et al., 2012, for a full description of stimulus creation and validation). The stimuli consisted of various dynamic and static angry, happy, and speech facial displays. For both static and dynamic conditions, images appeared on screen for 2.5 sec. The dynamic angry and happy video clips evolved over this period of 2.5 sec from a closed-mouthed neutral expression to their respective emotional expression, reaching peak expression between approximately 1 and 1.5 sec. They then maintained peak expression for the remaining duration of the video clip. In the dynamic condition, four different identities were presented in each of the three different display categories (i.e., angry, happy, speech), and likewise in the static condition. The identities were matched across the dynamic and static conditions, as the static stimuli were created from a screenshot of the final frame of each of the dynamic excerpts.

A sample of 24 stimuli (12 dynamic and 12 corresponding static images) was presented in an event-related design, and participants were instructed to maintain central fixation throughout the experiment. Stimuli were completely randomized and presented for 2.5 sec, with 2.5 sec of baseline (fixation cross) presented before each stimulus. There were 240 trials in an experimental run. Each trial consisted of a single stimulus presentation of 2.5 sec (active state) and an ISI fixation cross of 2.5 sec (passive state). Participants performed a 1-back memory



Figure 1. MEG experimental design: 120 dynamic and 120 static images were presented for 2.5 sec in a random sequence, alternating with a 2.5-sec fixation cross. Participants (n = 14) performed a 1-back memory task and responded via button press as to whether the identity of the current image matched the previous image.

task and responded with their dominant hand via the Lumina response pad as to whether the identity in the current image matched that of the previous image. They were asked to respond as soon as the stimulus appeared on screen. This is the same task that was used in the fMRI study, and as described in Foley et al. (2012), it was designed to maintain vigilance and to control for attention (see Figure 1).
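The 1-back identity task described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' Presentation script: the identity labels and the helper names are hypothetical, and only the fully randomized ordering and the match-the-previous-identity rule are taken from the text.

```python
import random

# Hypothetical identity labels; the study used four identities per display category.
IDENTITIES = ["id1", "id2", "id3", "id4"]

def make_sequence(n_trials, seed=0):
    """Fully randomized trial sequence (the experiment used 240 trials per run)."""
    rng = random.Random(seed)
    return [rng.choice(IDENTITIES) for _ in range(n_trials)]

def one_back_targets(sequence):
    """Correct 1-back response per trial: does the identity match the previous trial's?"""
    return [i > 0 and sequence[i] == sequence[i - 1]
            for i in range(len(sequence))]

print(one_back_targets(["id1", "id2", "id2", "id3"]))  # [False, False, True, False]
```

Scoring a participant then reduces to comparing their button presses against `one_back_targets(sequence)`.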

MEG data were recorded using a 275-channel CTF system with a third-order gradiometer configuration and a sampling rate of 600 Hz. Three electromagnetic localization coils were attached to the participants’ heads at the
nasion and bilateral preauricular points to localize their
head relative to the MEG sensors. Continuous head mo-
tion was monitored throughout the recording, and all
participants remained within the 1-cm range to reduce
contamination from motion artifacts. Participants were
seated in an upright position in the MEG scanner. Visual
stimuli were presented using Presentation (Neurobehavioral Systems, Inc., Berkeley, CA), and participants
viewed the computer monitor directly through a window
in the shielded room. A Polhemus Isotrak 3-D digitizer
was used to map the surface shape of each participant’s
head and localize the electromagnetic head coils with re-
spect to that surface. Each participant’s head shape file
was then extracted and coregistered to a high-resolution
T1-weighted anatomical MRI, which was acquired for each participant on a 3-T Magnetom Trio Scanner (Siemens, Erlangen, Germany) using an eight-channel
radio-frequency birdcage head coil. Coregistration was
performed using in-house software based on an algorithm
designed to minimize the squared Euclidean distance
between the Polhemus surface and the MRI surface. Ce
coregistration is accurate to within 5 mm (for further
details, see Adjamian et al., 2004).
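The coregistration criterion (minimizing the squared Euclidean distance between the digitized Polhemus head shape and the MRI scalp surface) can be illustrated with a brute-force cost function. The in-house algorithm itself is not public, so this is only a sketch of the objective being minimized; the point clouds, function names, and the rigid-transform helper are stand-ins.

```python
import numpy as np

def rigid_transform(points, R, t):
    """Apply a rotation R (3x3) and translation t (3,) to an (N, 3) point cloud."""
    return points @ R.T + t

def coreg_cost(head_pts, surf_pts):
    """Mean squared Euclidean distance from each digitized head-shape point
    to its nearest neighbor on a densely sampled MRI scalp surface."""
    d2 = ((head_pts[:, None, :] - surf_pts[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

# Toy check: an untransformed cloud matches the surface exactly.
rng = np.random.default_rng(0)
surface = rng.normal(size=(200, 3))
print(coreg_cost(surface, surface))  # 0.0
```

A fitting routine would search over R and t to drive `coreg_cost(rigid_transform(head_pts, R, t), surf_pts)` to its minimum, to within the stated 5-mm accuracy.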

Data Analysis

Data for each participant were edited and filtered to remove environmental artifacts: a 50-Hz powerline filter was applied, and the DC offset was removed. Third-order gradient noise reduction was also used to remove environmental noise that was picked up by the reference coils during acquisition. The MEG data were

analyzed using SAM, which is a spatial filtering “beamformer” technique that can be used to generate SPMs of stimulus- or event-related changes in cortical oscillatory power. A boxcar experimental design was used to assess
spectral power between active (dynamic faces) and pas-
sive (static faces) states in alpha (8–12 Hz) and beta (13–
30 Hz) frequency bands. The difference between the ac-
tive and passive spectral power estimates was assessed
for each voxel using a pseudo t statistic (Robinson &
Vrba, 1999). This produced a 3-D SAM image of cortical
activity for each participant under each condition. SAM
analysis was computed using 500-msec time windows
to assess the main effect of motion by directly comparing
power changes in low-frequency bands between dynamic
and static faces, starting from stimulus onset at 0 msec
(0–500, 500–1000, 1000–1500, 1500–2000, and 2000–
2500 msec). Rather than using a long time window of
2.5 sec to cover the length of stimulus display, ces
500-msec time windows were chosen to investigate the
temporal progression within the network at multiple
time points.
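As a rough illustration of this windowed contrast, the sketch below computes a pseudo-t-like statistic from band power in matched 500-msec windows. It is a didactic simplification: real SAM derives source power through beamformer weights from the sensor covariance, and the exact noise normalization follows Robinson and Vrba (1999); the signals and the noise estimate here are invented.

```python
import numpy as np

FS = 600              # Hz, as in the recording
WIN = int(0.5 * FS)   # 500-msec analysis windows (300 samples)

def band_power(x, lo, hi):
    """Total periodogram power of one window inside [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

def pseudo_t(active_wins, passive_wins, band, noise_var):
    """Pseudo-t-like contrast: mean band-power difference between conditions,
    normalized by an (assumed) noise-variance estimate."""
    pa = np.mean([band_power(w, *band) for w in active_wins])
    pp = np.mean([band_power(w, *band) for w in passive_wins])
    return (pa - pp) / (2.0 * noise_var)

# Synthetic demo: alpha-band (8-12 Hz) signal present only in the "active" state.
t = np.arange(WIN) / FS
active = [np.sin(2 * np.pi * 10 * t) for _ in range(5)]   # five 500-msec windows
passive = [np.zeros(WIN) for _ in range(5)]
print(pseudo_t(active, passive, band=(8, 12), noise_var=1.0) > 0)  # True
```

In the study, the "active" and "passive" windows are the corresponding 500-msec segments of the dynamic and static face trials, evaluated per voxel.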

Each participant’s data were normalized and converted to Talairach space using SPM (SPM99, www.fil.ion.ucl.ac.uk/spm) for group-level comparisons. Nonparametric permutation analysis using the Statistical non-Parametric Mapping toolbox (SnPM; www.fil.ion.ucl.ac.uk/spm) was computed to assess significant group effects. The output
of the beamformer may be described as a “virtual elec-
trode,” which can be visualized as time–frequency plots
of activity arising from specific voxels where spectral
power changes are identified (Singh et al., 2002), et
these can be used to characterize frequency-specific
spectral power changes associated with the task in more
detail (Hillebrand et al., 2005). ROIs were determined
based on significant group effects identified with SnPM
( p < .05, corrected) as showing significant peaks of activation within the face perception network, and locations were verified against our previous fMRI study (Foley et al., 2012). These included bilateral inferior occipital gyri (IOG), MTG, and superior temporal gyri. To provide a more detailed estimation of the time–frequency characteristics of the signal within each ROI, virtual electrodes were constructed at the individual level at the sites of peak activation that fell within a 1-cm radius of the group peak identified with SnPM.

The virtual electrodes were based on a covariance matrix constructed using a 5-sec window from 2.5 sec before stimulus onset to 2.5 sec after stimulus onset, with a bandwidth of 1–30 Hz. Time windows for baseline estimation were of equal duration to the time window of interest to achieve balanced covariance estimation. For visualization purposes, time–frequency wavelet plots were computed in MATLAB R2008b (The MathWorks, Natick, MA) on the virtual electrodes for a window of 0–2.5 sec after stimulus onset. Percent power change from baseline (the 1 sec preceding stimulus onset) was computed at each frequency for both dynamic and static stimuli to give the mean (across epochs and participants) power increases and decreases for the dynamic and static face stimuli separately for each participant. Dynamic and static face stimuli were then directly contrasted at each ROI from 0 to 2.5 sec after stimulus onset, across the group of participants, using a bootstrapping technique (Fisher et al., 2008; Singh et al., 2002). Only those changes that were significant at p < .05 are reported, to correspond with the contrast performed at the whole-head group level.
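The percent-change and bootstrap steps can be sketched with NumPy. Note the substitutions: a short-time FFT stands in for the Morlet wavelet transform actually computed in MATLAB, the function names are invented, and only the epoch layout (a 1-sec baseline preceding stimulus onset) and the participant-level resampling idea are carried over from the text.

```python
import numpy as np

FS = 600  # Hz

def stft_power(x, win=128, step=32):
    """Short-time power spectrum of a 1-D virtual-electrode trace
    (a stand-in for the wavelet transform used in the paper)."""
    n = (len(x) - win) // step + 1
    taper = np.hanning(win)
    S = np.stack([np.abs(np.fft.rfft(x[k * step:k * step + win] * taper)) ** 2
                  for k in range(n)], axis=1)
    times = (np.arange(n) * step + win / 2) / FS  # window centers, in sec
    freqs = np.fft.rfftfreq(win, d=1.0 / FS)
    return freqs, times, S

def percent_change(epoch, baseline_s=1.0):
    """Power as percent change from the 1-sec pre-stimulus baseline.
    `epoch` is assumed to start at -baseline_s relative to stimulus onset."""
    freqs, times, S = stft_power(epoch)
    base = S[:, times < baseline_s].mean(axis=1, keepdims=True)
    return freqs, times, 100.0 * (S - base) / base

def bootstrap_ci(dynamic, static, n_boot=2000, seed=0):
    """95% bootstrap CI for the mean dynamic-minus-static power difference,
    resampling participants with replacement."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(dynamic) - np.asarray(static)
    means = [diffs[rng.integers(0, len(diffs), len(diffs))].mean()
             for _ in range(n_boot)]
    return np.percentile(means, [2.5, 97.5])
```

A dynamic-versus-static effect at an ROI would be reported when the bootstrap interval excludes zero, mirroring the p < .05 criterion used at the whole-head level.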
An important aspect of this study is the direct comparison of two “active” states in the SAM analysis, rather than “active” versus “passive” as used in previous MEG and SAM studies (Lee et al., 2010; Muthukumaraswamy et al., 2006; Singh et al., 2002); hence, in this study, dynamic faces were directly compared with static faces, not with baseline. This methodology was employed to use a more robust control for the dynamic face stimuli (Kaiser, Rahm, & Lutzenberger, 2008) and to maintain consistency with our earlier fMRI study (Foley et al., 2012). However, to correctly interpret the results from the direct comparison of the dynamic versus static face conditions, the baseline comparisons must also be computed. This is because an overall decrease in the dynamic versus static face comparison may be driven by either a decrease in power for dynamic faces or an increase in power for static faces; hence, time–frequency plots for the direct comparison, along with the two baseline conditions, were computed.

RESULTS

Group Source Analysis Results

SAM was computed across five different 500-msec time windows to identify sources of differential activity between dynamic and static face stimuli across the length of stimulus presentation in the alpha and beta frequency bands. Group analysis was performed using SnPM to identify significantly clustered peaks across the group of participants in response to dynamic versus static face stimuli. As predicted, group SnPM analysis identified significant decreases in power in the alpha and beta bands for dynamic relative to static faces, in regions within the distributed face perception network, across the group of 14 participants (see Table 1 and Figure 2). The first time window analyzed was from 0 to 500 msec after stimulus onset, and this revealed an early decrease in low-frequency power in bilateral IOG in response to dynamic relative to static faces. The later time window of 500–1000 msec again revealed decreases in low-frequency power in bilateral IOG, along with bilateral MTG, right STS, and left inferior frontal gyrus. Within the time window from 1000 to 1500 msec, decreases in low-frequency power were seen in bilateral IOG, right MTG, right STS, right lingual gyrus, right inferior frontal gyrus, and left insula. Notably, responses are more right lateralized during this period, and responses in the lingual gyrus, MTG, and STS show higher levels of activation, as indexed by the higher t values. Interestingly, right STS shows a sustained response from approximately 500 to 1500 msec.

From 1500 to 2000 msec, low-frequency power decreases were localized to the right MTG, bilateral STS, right IOG, right middle occipital gyrus, and right inferior frontal gyrus. Again, these responses appear to be more right lateralized. Finally, from 2000 to 2500 msec, low-frequency power decreases were found in the right middle occipital gyrus, right STS, left MTG, left precentral gyrus, right middle frontal gyrus, and left inferior frontal gyrus (see Table 1 and Figure 2).

Table 1. Brain Regions for Group SAM Analysis Showing Decreases in Alpha and Beta Power (8–30 Hz) for Dynamic Compared with Static Faces within Time Windows

                                      Pseudo t Value    x, y, z
(A) 0–500 msec
L IOG (BA 17)                             −3.60         −12, −93, −18
R IOG (BA 18)                             −3.24          18, −82, −11
(B) 500–1000 msec
L MTG (BA 37)                             −4.35         −42, −63, 0
R MTG (BA 39)                             −3.83          48, −66, 15
L IOG (BA 18)                             −3.54         −27, −99, −18
R IOG (BA 18)                             −3.47          27, −81, −9
R STS (BA 22)                             −2.80          54, −15, 0
(C) 1000–1500 msec
L IOG (BA 18)                             −6.03         −30, −81, −3
R MTG (BA 37)                             −5.97          42, −63, 9
R STS (BA 22)                             −5.35          54, −15, 0
R IOG (BA 17)                             −5.23          12, −87, −6
L STS (BA 13)                             −4.83         −51, −39, 18
R inferior frontal gyrus (BA 46)          −4.00          36, 30, 21
L insula (BA 13)                          −2.93         −36, 21, 12
(D) 1500–2000 msec
R MTG (BA 19)                             −4.96          45, −63, 12
R STS (BA 22)                             −4.50          57, −9, 3
R IOG (BA 17)                             −4.10          12, −87, −6
L STS (BA 13)                             −3.74         −51, −39, 18
R middle occipital gyrus (BA 18)          −3.32          15, −99, 18
R inferior frontal gyrus (BA 47)          −2.85          57, 27, −6
R inferior frontal gyrus (BA 10)          −2.33          42, 39, 12
(E) 2000–2500 msec
R middle occipital gyrus (BA 18)          −4.34          30, −84, −3
R STS (BA 22)                             −3.15          60, −15, 0
L precentral gyrus (BA 6)                 −3.11          −9, −24, 68
L MTG (BA 37)                             −2.71         −45, −45, −60
R STS (BA 38)                             −2.43          48, 21, −30
R middle frontal gyrus (BA 10)            −2.38          39, 45, 21

Coordinates indicate local maxima in Talairach space. Clusters are significant at p < .05. L = left; R = right.

Figure 2. Group SAM image (n = 14) showing decreases in alpha and beta power for dynamic versus static faces within five different time windows: 0–500, 500–1000, 1000–1500, 1500–2000, and 2000–2500 msec after stimulus onset. The figure shows the progression of activation within the face perception network over time. Activation is shown in bilateral IOG, MTG, superior temporal gyri, and inferior frontal gyri. The blue–purple–white color scale represents a decrease in signal power ( p < .05, corrected).

Group Time–Frequency Results

Virtual electrodes were constructed to map the time–frequency characteristics of the ROIs within the dynamic face perception network (see Table 2 for the coordinates of the virtual electrodes). The specific ROIs selected were bilateral IOG, bilateral MTG, and bilateral STS, as these were the most robust regions identified across the group of participants in the SAM source analysis, which also corresponded to the regions within the core face perception network identified in our previous fMRI study (Foley et al., 2012). Virtual electrodes were constructed at the individual level only at sites of peak activation that fell within a 1-cm radius of the group peak identified with SnPM (see Table 2).

Table 2. Mean Stereotactic Coordinates for the Virtual Electrodes in Talairach Space

Region    n     x, y, z
L IOG     12    −12, −93, −18
R IOG     12     20, −81, −11
L MTG      8    −45, −45, −60
R MTG      8     42, −63, 9
L STS      8    −51, −39, 18
R STS      9     57, −9, 3

n = number of participants showing a significant peak in a particular region; L = left; R = right.

IOG

Peak decreases in oscillatory power were identified in 12 of 14 participants in bilateral IOG (see Table 2). The virtual electrodes constructed in the left IOG (see Figure 3) showed, at the group level, a sustained power decrease within 200 msec of stimulus onset for dynamic faces relative to baseline in the 10- to 30-Hz frequency range. Static faces compared with baseline showed a slight power increase at approximately 80 msec between 20 and 30 Hz, followed by a power decrease between 10 and 25 Hz, which was not quite as sustained as the decrease to dynamic faces. The direct comparison of dynamic and static faces showed an early power decrease within the first 200 msec between 8 and 30 Hz, followed by a slight increase and then a sustained decrease from approximately 700 msec onward.

The time–frequency characteristics of the virtual electrodes constructed in the right IOG were extremely similar to those of the left IOG described above (see Figure 4). When dynamic faces were compared with baseline, a sustained power decrease within 200 msec of stimulus onset was found in the 10- to 20-Hz frequency range. Again, static faces compared with baseline showed a slight power increase around 80 msec between 10 and 30 Hz, followed by a power decrease around 200 msec between 10 and 30 Hz. When the response to dynamic and static faces was directly compared, an early power decrease within the first 200 msec between 10 and 30 Hz was found, followed by a later decrease around 800 msec in the 10- to 30-Hz range.
MTG

Peak decreases in oscillatory power were identified in 8 of 14 participants in bilateral MTG (see Table 2). The virtual electrodes constructed in the left MTG (see Figure 5) revealed, at the group level, a sustained decrease in power from approximately 200 msec in the 20- to 30-Hz frequency range for dynamic faces relative to baseline, followed by a stronger sustained decrease in power from approximately 500 msec between 10 and 15 Hz. Static faces compared with baseline also showed a power decrease in the 20- to 30-Hz frequency range, but this occurred at a slightly later time of 400 msec and was not sustained. A similar decrease in power between 10 and 15 Hz was also revealed at 500 msec, but again, it was not as strong or sustained. When dynamic and static faces were directly compared, an early power decrease occurred at 200 msec between 12 and 25 Hz, followed by a decrease in power between 800 and 1500 msec in the 8- to 30-Hz frequency range.

In the right MTG (see Figure 6), dynamic faces compared with baseline showed a sustained decrease in power from approximately 500 msec between 10 and 30 Hz. Static faces compared with baseline revealed a decrease in power between 400 and 800 msec in the 10- to 30-Hz range, followed by a later decrease in power from 1600 msec between 8 and 25 Hz. The direct contrast of dynamic and static faces showed a power increase from 400 to 800 msec between 10 and 20 Hz, which was driven by the corresponding decrease in power at this time for the static faces. This was followed by a decrease in power between 900 and 2000 msec in the beta frequency range (12–30 Hz) due to the more sustained decrease in power for dynamic faces.
STS

Peak decreases in oscillatory power were identified in 8 of 14 participants in the left superior temporal gyrus (see Table 2). When virtual electrodes were constructed in the left STS (see Figure 7), the contrast of dynamic faces with baseline showed, at the group level, a sustained decrease in power from 600 msec onward around 12 Hz. The static faces elicited a power decrease between 200 and 600 msec in the 10- to 30-Hz range. The direct comparison of dynamic and static faces showed a short power increase at ∼500 msec, followed by a stronger and more sustained decrease from 800 msec onward between 10 and 30 Hz due to the more sustained decrease in power in the dynamic condition.

Peak decreases in oscillatory power were identified in 9 of 14 participants in the right superior temporal gyrus (see Table 2). The virtual electrodes constructed in the right STS (see Figure 8) revealed a broad decrease in power between 20 and 30 Hz from around 200 until 1600 msec for the dynamic faces compared with baseline, as well as a more sustained decrease in lower-frequency power (8–15 Hz) from 1000 msec onward. Static faces compared with baseline showed a slight power decrease at 200 msec and again at 600 msec between 20 and 30 Hz. Finally, the direct contrast of dynamic and static faces revealed a sustained decrease in power from 800 msec onward between 8 and 30 Hz due to the decrease in power for dynamic faces.

Figure 3. Group (n = 12) time–frequency findings in left IOG (LIOG) show a significant early decrease in low-frequency power (p < .05). (A) Dynamic faces compared with baseline show a sustained power decrease within 200 msec of stimulus onset between 10 and 30 Hz. (B) Static faces compared with baseline show a power increase at 80 msec, followed by a power decrease between 10 and 25 Hz. (C) Direct comparison of dynamic and static faces shows an early power decrease within 200 msec between 8 and 30 Hz, followed by a sustained power decrease from 700 msec onward. Black box indicates time windows showing significant differences between dynamic and static conditions that were identified in the whole-head group SAM analysis.

Figure 4. Group (n = 12) time–frequency findings in right IOG (RIOG) show a significant early decrease in low-frequency power (p < .05). (A) Dynamic faces compared with baseline show a sustained power decrease within 200 msec of stimulus onset between 10 and 30 Hz. (B) Static faces compared with baseline show a power increase at 80 msec, followed by a power decrease between 10 and 25 Hz. (C) Direct comparison of dynamic and static faces shows an early power decrease within 200 msec between 8 and 30 Hz. Black box indicates time windows showing significant differences between dynamic and static conditions that were identified in the whole-head group SAM analysis.

DISCUSSION

This study sought to examine how changes in low-frequency (8–30 Hz) oscillatory brain activity contribute to processing dynamic facial expressions by exploring how neural oscillatory profiles differ during perception of dynamic and static facial displays. To this end, MEG was used to explore the spatiotemporal and spectral power differences between dynamic and static face processing using realistic dynamic face stimuli.
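A direct dynamic-versus-static comparison of 8–30 Hz band power can be sketched conceptually as follows (a toy illustration on synthetic single-trial data with hypothetical names; the study itself used a SAM beamformer, not this simplified spectral contrast):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 600                          # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)

def band_power(x, fs, lo=8, hi=30):
    """Mean Welch power in the 8-30 Hz alpha/beta range."""
    f, pxx = welch(x, fs=fs, nperseg=fs // 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

# Synthetic trials: the "dynamic" condition carries less 8-30 Hz power,
# mimicking a stronger event-related desynchronization than "static"
def make_trials(alpha_amp, n=40):
    return [alpha_amp * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
            + rng.standard_normal(t.size) for _ in range(n)]

dynamic = np.array([band_power(x, fs) for x in make_trials(0.5)])
static = np.array([band_power(x, fs) for x in make_trials(1.5)])

# Pseudo-t-style contrast: difference of means scaled by pooled variability
# (negative here, i.e., lower alpha/beta power in the dynamic condition)
pseudo_t = (dynamic.mean() - static.mean()) / np.sqrt(
    dynamic.var(ddof=1) / dynamic.size + static.var(ddof=1) / static.size)
```

The sign convention matches the figures: greater low-frequency desynchronization for dynamic faces yields a negative contrast value.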
Whole-head beamformer source analyses revealed, at the millisecond level, significant differences in oscillatory power within alpha and beta frequency ranges (8–30 Hz) during processing of dynamic faces compared with their static counterparts in occipito-temporal, insular, and frontal cortices. It was predicted that responses in these distinct regions of the distributed face perception network would display different temporal patterns of activation, whereby regions within occipital cortices would display earlier responses based on their role in early visual processing, followed by later responses in temporal and frontal cortices (Sato et al., 2015; Furl et al., 2010). This was confirmed through both the whole-head beamformer source analyses and the time–frequency analyses of virtual electrodes, which were constructed at regions of peak activation within the "core" face perception network.

Unlike many previous neurophysiological studies of dynamic face perception that have examined evoked activity (Miki & Kakigi, 2014; Watanabe et al., 2005; Puce et al., 2000), we focused our analyses on modulations of induced cortical oscillatory activity, which have been relatively less well studied in this context. Our motivation to focus on induced activity was guided by increasing evidence of the significant role of neural oscillations in cortical processing, where neural oscillations are now believed to play a key role in binding and information transfer between brain regions (Roopun et al., 2008; Buzsáki & Draguhn, 2004; Engel et al., 2001). It is therefore important to examine information in the frequency domain, in addition to the spatial and temporal domains, to gain a more thorough understanding of face processing in the brain.
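The evoked/induced distinction can be demonstrated with a toy simulation (our own sketch, not the authors' analysis code): phase-locked activity survives averaging across trials, whereas non-phase-locked induced activity is only visible in single-trial power after the trial average is removed.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 600
t = np.arange(0, 1.0, 1 / fs)
n_trials = 50

# Synthetic trials: a phase-locked (evoked) 10 Hz component plus a
# non-phase-locked (induced) 20 Hz component with random phase per trial
trials = np.empty((n_trials, t.size))
for i in range(n_trials):
    evoked_part = np.sin(2 * np.pi * 10 * t)   # identical phase every trial
    induced_part = np.sin(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
    trials[i] = evoked_part + induced_part + 0.1 * rng.standard_normal(t.size)

evoked = trials.mean(axis=0)      # phase-locked activity survives averaging
residual = trials - evoked        # induced activity remains in each trial

# Spectra: the induced 20 Hz survives in the residual, the evoked 10 Hz does not
spec_evoked = np.abs(np.fft.rfft(evoked)) ** 2
spec_induced = (np.abs(np.fft.rfft(residual, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
i10, i20 = np.argmin(np.abs(freqs - 10)), np.argmin(np.abs(freqs - 20))
```

Averaging power across single trials after removing the evoked mean is one common way to isolate induced oscillations; the frequencies and trial counts here are arbitrary choices for illustration.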
With a specific focus on dynamic face processing, there is evidence to suggest that perception of biological motion and facial dynamics is associated with modulations in alpha and beta frequency power in particular (Jabbi et al., 2015; Popov, Miller, Rockstroh, & Weisz, 2013; Muthukumaraswamy et al., 2006; Singh et al., 2002). Hence, we concentrated our investigations on this range of frequencies (8–30 Hz).

The group SAM source analysis revealed a distributed network of brain regions showing differential responses for dynamic relative to static stimuli, characterized by greater decreases in low-frequency (8–30 Hz) power in response to viewing dynamic face stimuli. Early differential responses were identified in bilateral IOG within 200 msec of stimulus onset, followed by later responses in regions such as MTG and STS within 800 msec of stimulus onset. Additional regions within the so-called extended system, including the insula, inferior frontal and middle frontal gyri, and precentral gyri, all showed significantly greater decreases in low-frequency (8–30 Hz) oscillatory power for dynamic relative to static face stimuli from approximately 1000 msec onward. The activation of this network of regions in response to dynamic face stimuli is consistent with findings from previous electrophysiological and neuroimaging studies (Sato et al., 2004, 2015; Foley et al., 2012), providing converging evidence and additional insights into the role and neural signature of these regions during dynamic face perception. Furthermore, in this study, we endeavored to go beyond previous research on dynamic face processing by investigating oscillatory activity in more detail in specific ROIs. By computing virtual electrodes, it was possible to explore the profile of activity in specific regions within the face perception network and examine processing differences associated with dynamic and static faces in more detail.

Figure 5. Group (n = 8) time–frequency findings in left MTG (LMTG) show significant decreases in low-frequency power (p < .05). (A) Dynamic faces compared with baseline show a sustained power decrease around 200 msec between 20 and 30 Hz and a sustained power decrease around 500 msec between 10 and 15 Hz. (B) Static faces compared with baseline show a power decrease around 400 msec between 20 and 30 Hz. (C) Direct comparison of dynamic and static faces shows a power decrease around 200 msec between 12 and 25 Hz and a power decrease from 800 to 1500 msec between 8 and 30 Hz. Black box indicates time windows showing significant differences between dynamic and static conditions that were identified in the whole-head group SAM analysis.

Figure 6. Group (n = 8) time–frequency findings in right MTG (RMTG) show significant decreases in low-frequency power (p < .05). (A) Dynamic faces compared with baseline show a sustained power decrease around 500 msec between 10 and 30 Hz. (B) Static faces compared with baseline show a power decrease from 400 to 800 msec between 10 and 30 Hz and a later decrease around 1600 msec between 8 and 25 Hz. (C) Direct comparison of dynamic and static faces shows a power increase from 400 to 800 msec between 10 and 20 Hz and a power decrease around 900 msec between 12 and 30 Hz. Black box indicates time windows showing significant differences between dynamic and static conditions that were identified in the whole-head group SAM analysis.

The IOG form part of the core system for early visual analysis of faces, as described in Haxby et al.'s (2000) model. Time–frequency analysis of the virtual SAM sensors that were constructed in bilateral IOG revealed differential patterns of activation for the dynamic and static face stimuli. There was a decrease in oscillatory power between 10 and 30 Hz in response to both the dynamic and static face stimuli within 200 msec of stimulus onset, but overall, there was a significantly greater and more sustained decrease in power in the dynamic face condition. This early response in inferior occipital regions is consistent with evoked potential studies that have localized the major evoked response components (M100 and M170) known to occur during this period to occipital regions (Sato et al., 2015; Itier & Taylor, 2002; Liu, Harris, & Kanwisher, 2002). However, it must be noted that the current analysis method was designed to detect changes in oscillatory activity over time windows of hundreds of milliseconds and is relatively insensitive to more time-locked transient activity occurring over short intervals, such as evoked responses.

The overall finding of significantly greater decreases in sustained alpha and beta power within bilateral IOG for dynamic relative to static faces suggests that the IOG is involved in processing both the static and dynamic face stimuli, but that additional processing was required for the dynamic stimuli. This is consistent with fMRI data showing significant increases in BOLD activation in IOG for dynamic face stimuli (Fox et al., 2009; Schultz & Pilz, 2009).
The sustained response in IOG has important implications for hierarchical feedforward models of face perception, as it suggests that IOG not only is involved in early visual analysis but also may play a role in higher-level processing (Atkinson & Adolphs, 2011), such as facial expression analysis (Pitcher, Garrido, Walsh, & Duchaine, 2008), possibly due to afferent feedback connections from the STS (Furl, Henson, Friston, & Calder, 2015; Foley et al., 2012).

Regions within bilateral MTG, which include area MT, also showed significantly greater decreases in alpha and beta power for dynamic relative to static faces. Area MT is a well-established motion-processing region in both the fMRI and MEG literature (e.g., Sato et al., 2004; Singh et al., 2002; Ahlfors et al., 1999), and previous fMRI studies have implicated regions of MTG in dynamic face processing (Sato et al., 2015; Foley et al., 2012; Trautmann et al., 2009). In an MEG study, Watanabe et al. (2005) identified area MT as a key region involved in facial motion processing. They examined evoked responses to facial movements and found that both eye and mouth movements elicited responses in this region around 170 msec after stimulus onset, with larger responses to eye movements. Area MT has also been shown with EEG to respond to biological motion within 100–200 msec of stimulus onset (Krakowski et al., 2011). In addition, Singh et al. (2002) reported a decrease in low-frequency power in MTG when participants viewed point-light displays of biological motion. Our results are consistent with previous findings showing that regions of MTG are involved in processing biological motion, including dynamic facial displays, which appears to be facilitated through sustained decreases in oscillatory power between 8 and 30 Hz.
The STS is another important structure within the face perception network that has consistently been shown to be involved in motion processing, including biological motion (Krakowski et al., 2011; Jokisch, Daum, Suchan, & Troje, 2005; Singh et al., 2002), dynamic face processing (Furl et al., 2015; Jabbi et al., 2015; Foley et al., 2012; Sato et al., 2004), and multimodal integration (Hagan, Woods, Johnson, Green, & Young, 2013). Here, we found significant decreases in oscillatory power (8–30 Hz) in bilateral STS during dynamic face processing. Time–frequency analysis of the virtual SAM sensors constructed within regions of posterior STS revealed similar response patterns in the left and right STS sources. These were characterized by a brief decrease in power at ∼500 msec in both the static and dynamic conditions, followed by a significantly stronger sustained decrease in power from 800 msec onward in the dynamic condition only. It would therefore appear that the STS contributes largely to the processing of dynamic faces and, to a lesser extent, static faces. This is consistent with results from fMRI studies, wherein the STS showed significantly greater activation for dynamic relative to static faces (Foley et al., 2012; Trautmann et al., 2009; Sato et al., 2004). In line with this, an fMRI study by Pitcher et al. (2011) reported a region in the right posterior STS that responded almost three times more strongly to dynamic compared with static faces. Increases in theta power can also be seen in the time–frequency plots in bilateral STS. We speculate that this may represent cross-frequency power coupling between theta and alpha/beta frequency ranges during face processing, consistent with recent findings from Furl et al. (2014) using static face stimuli. However, as this was not the focus of the present study, a more specific analysis would be required to investigate the effects of cross-frequency power coupling during naturalistic dynamic face processing.

Figure 7. Group (n = 8) time–frequency findings in left STS (LSTS) show significant decreases in low-frequency power (p < .05). (A) Dynamic faces compared with baseline show a sustained power decrease from 400 to 2000 msec around 12 Hz. (B) Static faces compared with baseline show a power decrease from 200 to 600 msec between 10 and 30 Hz. (C) Direct comparison of dynamic and static faces shows significant sustained decreases in power from 800 msec onward between 10 and 30 Hz. Black box indicates time windows showing significant differences between dynamic and static conditions that were identified in the whole-head group SAM analysis.

Figure 8. Group (n = 9) time–frequency findings in right STS (RSTS) show significant decreases in low-frequency power (p < .05). (A) Dynamic faces compared with baseline show a power decrease between 20 and 30 Hz from 200 to 1600 msec and a sustained decrease in power between 8 and 15 Hz around 1000 msec. (B) Static faces compared with baseline show a power decrease between 20 and 30 Hz at 200 and 600 msec. (C) Direct comparison of dynamic and static faces shows significant sustained decreases in power from 800 msec onward between 8 and 30 Hz. Black box indicates time windows showing significant differences between dynamic and static conditions that were identified in the whole-head group SAM analysis.
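One way the cross-frequency power-coupling analysis alluded to above could proceed (a hypothetical sketch on synthetic data; no such analysis was performed in this study) is to correlate band-limited power envelopes over time:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power_envelope(x, fs, lo, hi):
    """Instantaneous band power via bandpass filtering + Hilbert envelope."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x))) ** 2

rng = np.random.default_rng(1)
fs = 600
t = np.arange(0, 10.0, 1 / fs)

# Synthetic coupling: one slow modulator drives both theta and alpha amplitude
modulator = 1.0 + 0.8 * np.sin(2 * np.pi * 0.3 * t)
x = (modulator * np.sin(2 * np.pi * 6 * t)       # theta carrier
     + modulator * np.sin(2 * np.pi * 11 * t)    # alpha carrier
     + 0.2 * rng.standard_normal(t.size))

theta_power = band_power_envelope(x, fs, 4, 8)
alpha_power = band_power_envelope(x, fs, 8, 13)
r = np.corrcoef(theta_power, alpha_power)[0, 1]  # positive under power coupling
```

A high envelope correlation would indicate power–power coupling of the kind Furl et al. (2014) reported; testing this on the present data would require the more specific analysis noted above.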
The STS possesses strong reciprocal connections with frontal and paralimbic regions as well as with visual cortical areas (Hein & Knight, 2008). This unique arrangement of connections allows the STS to act as a functional interface between regions involved in early visual perceptual processing and emotion processing (Hein & Knight, 2008). It has been proposed that modulations of oscillatory beta power in the STS associated with dynamic face processing may represent a mechanism to facilitate the integration of information from multiple input areas to attribute meaning to facial movements (Jabbi et al., 2015). In line with this, it has been shown that occipital visual regions accumulate information from dynamic face stimuli over shorter time intervals and thus respond faster (165 msec), whereas the STS accumulates information over longer time intervals, resulting in a later response (237 msec), thereby enabling the integration of information from multiple sites (Furl et al., 2010). This is consistent with the pattern of results observed in this study, showing earlier decreases in oscillatory power (8–30 Hz) in inferior temporal gyrus and MTG, followed by later modulations in oscillatory activity in STS in response to dynamic facial displays.

Regions of the so-called extended face perception system, including sources in inferior and middle frontal gyri, left insula, and precentral gyrus, all showed significant decreases in oscillatory power between 8 and 30 Hz in response to dynamic relative to static face stimuli. These regions were all identified as structures forming part of the dynamic face perception network in our previous fMRI study, where they showed significant increases in BOLD signal in response to dynamic facial displays, demonstrating cross-modal convergence.
Inferior frontal gyri, in particular, have been implicated in the perception of dynamic faces in both fMRI and MEG studies (Sato et al., 2004, 2015; Foley et al., 2012; Fox et al., 2009; Schultz & Pilz, 2009). Inferior frontal regions are generally associated with top–down cognitive processes; hence, the decreases in oscillatory power in inferior frontal sources identified in this study may reflect a form of additional top–down cognitive processing that is required for dynamic faces, perhaps because dynamic faces contain more information to be coordinated compared with static faces (Arsalidou, Morris, & Taylor, 2011). The insula is believed to play an important role in emotion perception through its projections to the inferior prefrontal cortex and amygdala (Phelps et al., 2001). It has also been associated with empathy (Adolphs, 2009) and language processing (for a review, see Oh, Duerden, & Pang, 2014). Consistent with our findings, Jabbi et al. (2015) also reported decreased beta-band power in the left insula for dynamic face stimuli. This increased response for dynamic facial displays may be due to the increased salience of the dynamic face stimuli (Trautmann et al., 2009).

In summary, we found that, in comparison with static pictures of faces, dynamic images of faces were associated with significantly greater modulations in alpha and beta oscillatory activity across a distributed network of regions. Notably, all of the identified sources corresponded very closely with regions of the dynamic face perception network that were identified with fMRI in a preceding study using similar experimental protocols.
These findings demonstrate strong concordance between two different imaging techniques and are consistent with a growing body of literature showing that the BOLD signal correlates negatively with alpha and beta oscillatory activity (Hall et al., 2014; Zumer et al., 2010; Singh et al., 2002), and they add support to the role of these signals in processing social stimuli (Engel & Fries, 2010). Our results therefore demonstrate that perception of realistic dynamic facial stimuli activates a distributed neural network at varying time points, facilitated by modulations in low-frequency power within alpha and beta frequency ranges (8–30 Hz).

An important aspect of this study that may have influenced the results is that two "active" states were compared in the SAM analysis rather than an "active" versus a "passive" state; that is, dynamic faces were compared directly with static faces, not with baseline. In most studies using SAM analysis, an "active" state is compared with a "passive" state, generally represented by a prestimulus baseline period (Lee et al., 2010; Muthukumaraswamy et al., 2006; Singh et al., 2002). Here, however, a direct comparison was made between two active states, the dynamic and static face stimuli. This methodology was employed to provide a more robust control for the dynamic face stimuli and to maintain consistency with our previous fMRI paradigm (Foley et al., 2012). Direct contrasts may yield more focal oscillatory activations than comparisons of prestimulus versus poststimulus responses (Kaiser et al., 2008); hence, the patterns of activation described here may be more focal than those found in previous studies.

A limitation of this study that should be addressed in future research is the use of the 1-back identity recognition task, which may have biased face processing. This task was employed to ensure that participants maintained vigilance throughout and to facilitate comparison with our preceding fMRI study.
However, the task may have influenced modulation in alpha activity in particular, as alpha oscillations are believed to play a role in optimizing performance in task-relevant regions and reducing processing in task-irrelevant regions (Jensen & Mazaheri, 2010). This could be addressed in future studies by using task-free paradigms, such as passive viewing of dynamic faces. It must also be noted that our task involved implicit recognition of facial expressions only; the spatiotemporal and spectral profiles associated with explicit face processing should therefore be addressed in future work. In addition, we did not include a control for nonbiological or nonfacial motion in this study. Although a contrast with nonfacial motion could provide information on face-specific dynamics, we were specifically interested in exploring how neural oscillatory profiles differ during perception of dynamic as opposed to static facial displays. In line with Jabbi et al. (2015), our fMRI (Foley et al., 2012) and MEG analyses have elucidated the areas and mechanisms that are involved in biological facial motion while controlling for static information.

In conclusion, ecological validity is an important aspect of experimental research. Neuroscience research in general, and vision and face perception research in particular, is striving toward the use of more naturalistic and ecologically valid stimuli and experimental designs (Johnston et al., 2015; Hasson, Malach, & Heeger, 2010). This is exemplified by the growing trend of studying neural activation under more natural viewing conditions, such as while watching movies (see Hasson et al., 2010, for a review). This level of research is necessary, as it aims to establish the functional significance of neural responses in natural conditions, which may previously have been characterized only with artificial stimuli (Felsen & Dan, 2005).
In this context, naturalistic dynamic face stimuli provide a better means of representing the complex nature of perceiving emotions from facial expressions in the real world. Therefore, authentic dynamic stimuli should be used to uncover the neural correlates of natural face perception, where the ultimate goal is to progress from an understanding of how static images of single faces are processed to how real faces are perceived dynamically and interactively in the real world (Atkinson & Adolphs, 2011).

Reprint requests should be sent to Elaine Foley, Aston Brain Centre, School of Life and Health Sciences, Aston University, Birmingham B4 7ET, UK, or via e-mail: e.foley@aston.ac.uk.

REFERENCES

Adjamian, P., Barnes, G. R., Hillebrand, A., Holliday, I. E., Singh, K. D., Furlong, P. L., et al. (2004). Co-registration of magnetoencephalography with magnetic resonance imaging using bite-bar-based fiducials and surface-matching. Clinical Neurophysiology, 115, 691–698.
Adolphs, R. (2002a). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169–177.
Adolphs, R. (2002b). Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1, 21–62.
Adolphs, R. (2009). The social brain: Neural basis of social knowledge. Annual Review of Psychology, 60, 693–716.
Ahlfors, S. P., Simpson, G. V., Dale, A. M., Belliveau, J. W., Liu, A. K., Korvenoja, A., et al. (1999). Spatiotemporal activity of a cortical network for processing visual motion revealed by MEG and fMRI. Journal of Neurophysiology, 82, 2545–2555.
Ambadar, Z., Schooler, J. W., & Cohn, J. F. (2005).
Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychological Science, 16, 403–410.
Arsalidou, M., Morris, D., & Taylor, M. J. (2011). Converging evidence for the advantage of dynamic facial expressions. Brain Topography, 24, 149–163.
Atkinson, A. P., & Adolphs, R. (2011). The neuropsychology of face perception: Beyond simple dissociations and functional selectivity. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 366, 1726–1738.
Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926–1929.
Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641–651.
Donner, T. H., & Siegel, M. (2011). A framework for local cortical oscillation patterns. Trends in Cognitive Sciences, 15, 191–199.
Engel, A. K., & Fries, P. (2010). Beta-band oscillations—Signalling the status quo? Current Opinion in Neurobiology, 20, 156–165.
Engel, A. K., Fries, P., & Singer, W. (2001). Dynamic predictions: Oscillations and synchrony in top–down processing. Nature Reviews Neuroscience, 2, 704–716.
Felsen, G., & Dan, Y. (2005). A natural approach to studying vision. Nature Neuroscience, 8, 1643–1646.
Fisher, A. E., Furlong, P. L., Seri, S., Adjamian, P., Witton, C., Baldeweg, T., et al. (2008). Interhemispheric differences of spectral power in expressive language: A MEG study with clinical applications. International Journal of Psychophysiology, 68, 111–122.
Foley, E., Rippon, G., Thai, N. J., Longe, O., & Senior, C. (2012). Dynamic facial expressions evoke distinct activation in the face perception network: A connectivity analysis study. Journal of Cognitive Neuroscience, 24, 507–520.
Fox, C. J., Iaria, G., & Barton, J. J. S. (2009). Defining the face processing network: Optimization of the functional localizer in fMRI. Human Brain Mapping, 30, 1637–1651.
Furl, N., Coppola, R., Averbeck, B. B., & Weinberger, D. R. (2014). Cross-frequency power coupling between hierarchically organized face-selective areas. Cerebral Cortex, 24, 2409–2420.
Furl, N., Henson, R. N., Friston, K. J., & Calder, A. J. (2015). Network interactions explain sensitivity to dynamic faces in the superior temporal sulcus. Cerebral Cortex, 25, 2876–2882.
Furl, N., van Rijsbergen, N. J., Kiebel, S. J., Friston, K. J., Treves, A., & Dolan, R. J. (2010). Modulation of perception and brain activity by predictable trajectories of facial expressions. Cerebral Cortex, 20, 694–703.
Hagan, C. C., Woods, W., Johnson, S., Green, G. G. R., & Young, A. W. (2013). Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG. PLoS One, 8, e70648.
Hall, E. L., Robson, S. E., Morris, P. G., & Brookes, M. J. (2014). The relationship between MEG and fMRI. Neuroimage, 102, 80–91.
Hasson, U., Malach, R., & Heeger, D. J. (2010). Reliability of cortical activity during natural stimulation. Trends in Cognitive Sciences, 14, 40–48.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51, 59–67.
Hein, G., & Knight, R. T. (2008). Superior temporal sulcus—It's my area: Or is it? Journal of Cognitive Neuroscience, 20, 2125–2136.
Hillebrand, A., Singh, K. D., Holliday, I. E., Furlong, P. L., & Barnes, G. R. (2005). A new approach to neuroimaging with magnetoencephalography. Human Brain Mapping, 25, 199–211.
Ishai, A. (2008). Let's face it: It's a cortical network. Neuroimage, 40, 415–419.
Itier, R. J., & Taylor, M. J. (2002).
Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. Neuroimage, 15, 353–372. Jabbi, M., Kohn, P. D., Nash, T., Ianni, A., Coutlee, C., Holroyd, T., et al. (2015). Convergent BOLD and beta-band activity in superior temporal sulcus and frontolimbic circuitry underpins human emotion cognition. Cerebral Cortex, 25, 1878–1888. Jensen, O., & Mazaheri, A. (2010). Shaping functional architecture by oscillatory alpha activity: Gating by inhibition. Frontiers in Human Neuroscience, 4, 186. Johnston, P., Molyneux, R., & Young, A. W. (2015). The N170 observed “in the wild”: Robust event-related potentials to faces in cluttered dynamic visual scenes. Social Cognitive and Affective Neuroscience, 10, 938–944. Jokisch, D., Daum, I., Suchan, B., & Troje, N. F. (2005). Structural encoding and recognition of biological motion: Evidence from event-related potentials and source analysis. Behavioural Brain Research, 157, 195–204. Kaiser, J., Rahm, B., & Lutzenberger, W. (2008). Direct contrasts between experimental conditions may yield more focal oscillatory activations than comparing pre- versus post- stimulus responses. Brain Research, 1235, 63–73. Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage, 18, 156–168. Krakowski, A. I., Ross, L. A., Snyder, A. C., Sehatpour, P., Kelly, S. P., & Foxe, J. J. (2011). The neurophysiology of human biological motion processing: A high-density electrical mapping study. Neuroimage, 56, 373–383. LaBar, K. S., Crupain, M. J., Voyvodic, J. T., & McCarthy, G. (2003). Dynamic perception of facial affect and identity in the human brain. Cerebral Cortex, 13, 1023–1033. Lee, L. C., Andrews, T. J., Johnson, S. J., Woods, W., Gouws, A., Green, G. G. R., et al. (2010). 
Neural responses to rigidly moving faces displaying shifts in social attention investigated with fMRI and MEG. Neuropsychologia, 48, 477–490. Liu, J., Harris, A., & Kanwisher, N. (2002). Stages of processing in face perception: An MEG study. Nature Neuroscience, 5, 910–916. Miki, K., & Kakigi, R. (2014). Magnetoencephalographic study on facial movements. Frontiers in Human Neuroscience, 8, 550. Miki, K., Takeshima, Y., Watanabe, S., Honda, Y., & Kakigi, R. (2011). Effects of inverting contour and features on processing for static and dynamic face perception: An MEG study. Brain Research, 1383, 230–241. Miki, K., Watanabe, S., Honda, Y., Nakamura, M., & Kakigi, R. (2007). Effects of face contour and features on early occipitotemporal activity when viewing eye movement. Neuroimage, 35, 1624–1635. Miki, K., Watanabe, S., Kakigi, R., & Puce, A. (2004). Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233. Magnetoencephalographic study of occipitotemporal activity elicited by viewing mouth movements. Clinical Neurophysiology, 115, 1559–1574. Foley, Rippon, and Senior 351 Muthukumaraswamy, S. D., Johnson, B. W., Gaetz, W. C., & Cheyne, D. O. (2006). Neural processing of observed oro-facial movements reflects multiple action encoding strategies in the human brain. Brain Research, 1071, 105–112. Oh, A., Duerden, E. G., & Pang, E. W. (2014). The role of the insula in speech and language processing. Brain and Language, 135, 96–103. Phelps, E. A., O’Connor, K. J., Gatenby, J. C., Gore, J. C., Grillon, C., & Davis, M. (2001). Activation of the left amygdala to a cognitive representation of fear. Nature Neuroscience, 4, 437–441. Pitcher, D., Dilks, D. D., Saxe, R. R., Triantafyllou, C., & Kanwisher, N. (2011). Differential selectivity for dynamic versus static information in face-selective cortical regions. Neuroimage, 56, 2356–2363. 
Pitcher, D., Garrido, L., Walsh, V., & Duchaine, B. C. (2008). Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions. Journal of Neuroscience, 28, 8929–8933. Popov, T., Miller, G. A., Rockstroh, B., & Weisz, N. (2013). Modulation of α power and functional connectivity during facial affect recognition. Journal of Neuroscience, 33, 6018–6026. Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 358, 435–445. Puce, A., Smith, A., & Allison, T. (2000). ERPs evoked by viewing facial movements. Cognitive Neuropsychology, 17, 221–239. Robinson, S., & Vrba, J. (1999). Functional neuroimaging by synthetic aperture magnetometry (SAM). In T. Yoshimoto, M. Kotani, S. Kuriki, H. Karibe, & N. Nakasato (Eds.), Recent advances in biomagnetism (pp. 302–305). Sendai, Japan: Tohoku University Press. Roopun, A. K., Kramer, M. A., Carracedo, L. M., Kaiser, M., Davies, C. H., Traub, R. D., et al. (2008). Temporal interactions between cortical rhythms. Frontiers in Neuroscience, 2, 145–154. Sato, W., Kochiyama, T., Uono, S., & Yoshikawa, S. (2008). Time course of superior temporal sulcus activity in response to eye gaze: A combined fMRI and MEG study. Social Cognitive and Affective Neuroscience, 3, 224–232. Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Brain Research, 20, 81–91. Schultz, J., & Pilz, K. S. (2009). Natural facial motion enhances cortical responses to faces. Experimental Brain Research, 194, 465–475. Schyns, P. G., Thut, G., & Gross, J. (2011). Cracking the code of oscillatory activity. PLoS Biology, 9, e1001064. Singh, K. D., Barnes, G. R., Hillebrand, A., Forde, E. M. E., & Williams, A. L. (2002). 
Task-related changes in cortical synchronization are spatially coincident with the hemodynamic response. Neuroimage, 16, 103–114. Trautmann, S. A., Fehr, T., & Herrmann, M. (2009). Emotions in motion: Dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion- specific activations. Brain Research, 1284, 100–115. Ulloa, J. L., Puce, A., Hugueville, L., & George, N. (2014). Sustained neural activity to gaze and emotion perception in dynamic social scenes. Social Cognitive and Affective Neuroscience, 9, 350–357. Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45, 174–194. Watanabe, S., Kakigi, R., & Puce, A. (2001). Occipitotemporal activity elicited by viewing eye movements: A magnetoencephalographic study. Neuroimage, 13, 351–363. Watanabe, S., Miki, K., & Kakigi, R. (2005). Mechanisms of face perception in humans: A magneto- and electro- encephalographic study. Neuropathology, 25, 8–20. Wheaton, K. J., Pipingas, A., Silberstein, R. B., & Puce, A. (2001). Human neural responses elicited to observing the actions of others. Visual Neuroscience, 18, 401–406. Zumer, J. M., Brookes, M. J., Stevenson, C. M., Francis, S. T., & Sato, W., Kochiyama, T., & Uono, S. (2015). Spatiotemporal neural network dynamics for the processing of dynamic facial expressions. Scientific Reports, 5, 1–13. Morris, P. G. (2010). Relating BOLD fMRI and neural oscillations through convolution and optimal linear weighting. Neuroimage, 49, 1479–1489. 352 Journal of Cognitive Neuroscience Volume 30, Number 3 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u / j / o c n a r t i c e - p d l f / / / / 3 0 3 3 3 8 1 7 8 7 2 1 5 / j o c n _ a _ 0 1 2 0 9 p d . f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3Modulation of Neural Oscillatory Activity image