Exploring Theater Neuroscience: Using Wearable
Functional Near-infrared Spectroscopy to
Measure the Sense of Self and Interpersonal
Coordination in Professional Actors

Dwaynica A. Greaves1,2*, Paola Pinti2*, Sara Din2, Robert Hickson2,
Mingyi Diao2, Charlotte Lange2, Priyasha Khurana2, Kelly Hunter3,
Ilias Tachtsidis2, and Antonia F. de C. Hamilton2

ABSTRACT

■ Ecologically valid research and wearable brain imaging are increasingly important in cognitive neuroscience as they enable researchers to measure neural mechanisms of complex social behaviors in real-world environments. This article presents a proof-of-principle study that aims to push the limits of what wearable brain imaging can capture and find new ways to explore the neuroscience of acting. Specifically, we focus on how to build an interdisciplinary paradigm to investigate the effects of taking on a role on an actor's sense of self, and we present methods to quantify interpersonal coordination at different levels (brain, physiology, behavior) as pairs of actors rehearse an extract of a play prepared for live performance. Participants were six actors from Flute Theatre, rehearsing an extract from Shakespeare's A Midsummer Night's Dream. Sense of self was measured in terms of the response of the pFC to hearing one's own name (compared with another person's name). Interpersonal coordination was measured using wavelet coherence analysis of brain signals, heartbeat, breathing, and behavior. Findings show that it is possible to capture an actor's pFC response to their own name and that this response is suppressed when an actor rehearses a segment of the play. Moreover, we found that it is possible to measure interpersonal synchrony across three modalities simultaneously. These methods open the way to new studies that can use wearable neuroimaging and hyperscanning to understand the neuroscience of social interaction and the complex social–emotional processes involved in theatrical training and performing theater.

INTRODUCTION

Theater has the power to portray reality or to create alternative realities that tell stories in a powerful and engaging fashion. The creation of imaginary worlds and narratives leads to actors taking on new social roles. We speculate that trained actors may have particular expertise in social interaction, as they are able to present different characters to an audience and recreate the same convincing social interaction night after night on the stage. There is a mutual understanding between the actors and the audience that the event on the stage is a pretense; for example, if two characters have a parent–child relationship on stage, this is not a real relationship; it is subject to the storyline and ceases to exist outside the performance (Goldstein & Bloom, 2011). Although many people enjoy the social and dynamic nature of theater, little is known about the neural and cognitive processes that enable actors

1Goldsmiths, University of London, United Kingdom, 2University College London, United Kingdom, 3Flute Theatre, London, United Kingdom
*D. A. Greaves and P. Pinti contributed equally to this study.

© 2022 Massachusetts Institute of Technology

to do this. In the growing domain of neuroaesthetics, researchers are beginning to understand how dance, visual art, and music impact our brain and cognitive processes (Omigie et al., 2015; Kirk, Skov, Hulme, Christensen, & Zeki, 2009; Calvo-Merino, Jola, Glaser, & Haggard, 2008; Cross, Hamilton, & Grafton, 2006). However, much less is known about theater.

The present article introduces new ways in which neuroscientists can study theater. We focus on behavior and brain systems in actors, and specifically the cognitive processes involved when actors take on a role. More precisely, this article aims to (1) push the boundaries of wearable neuroimaging systems to capture pairs of people moving about a theater-rehearsal space, (2) find new methods to study how taking on a role impacts an actor's sense of self, and (3) present methods to explore patterns of naturalistic social interactions through the quantification of their interpersonal coordination at different levels (brain, physiology, behavior). To achieve these goals, we present a novel study using a multimodal platform including functional near-infrared spectroscopy (fNIRS), systemic physiology, and behavior (motion capture and video recording) to capture the neural, physiological, and

Journal of Cognitive Neuroscience 34:12, pp. 2215–2236
https://doi.org/10.1162/jocn_a_01912

behavioral signatures of pairs of actors in rehearsal for a
Shakespeare performance, and present two possible
approaches to analyzing this complex data set as proof
of principle. First, we explain why theater-neuroscience
is important and outline some of the factors that must
be considered in designing and implementing research
in this domain.

The Importance of Theater-neuroscience Research

Understanding the neurocognitive mechanisms of acting and performance is important for neuroscience, the arts, and education. For neuroscience, acting may provide an opportunity to study complex interactions between people in a controlled and reproducible fashion. There is an increasing interest among researchers in "real-world neuroscience" (Bevilacqua et al., 2018; Cruz-Garza et al., 2017), second-person neuroscience (Schilbach et al., 2013), and embodied cognition (Martin, Kessler, Cooke, Huang, & Meinzer, 2020; Wilson, 2002), but it has sometimes been challenging to design paradigms in these new areas. Theater may offer an important way to move beyond the static and often computer-based stimuli used in experiments carried out in isolation and to examine the dynamics of social and emotional engagement involved in complex and meaningful contexts. This is valuable in several ways. First, two actors rehearsing a scene are able to perform the same actions repeatedly and reproducibly, allowing multiple recordings of data to be collected from them, while retaining the socio-emotional meaning of the scene in each performance. Thus, studying actors can enable the investigation of the fundamental processes that coordinate speech and motor behavior between people in a reproducible setting. For example, the cognitive processes of the actors playing Titania and Oberon in A Midsummer Night's Dream cannot be assumed to match those of real fairy lovers, but the auditory, visual, and motor performance of the actors does draw on real social interactions. Second, the question of what it takes, cognitively, to become a character through the rehearsal process is an important and rarely examined one. One recent study suggests that playing a role may alter activation in brain regions linked to the sense of self (Brown, Cockett, & Yuan, 2019), and we examine this possibility in a later section. Further examination of how actors take on roles may give important insights into the sense of self and the neural mechanisms of pretense and imagination. Finally, interactions between actors and naive participants in conjunction with brain imaging may provide a powerful way to explore a range of types of social interaction, extending examples of classic social psychology to modern neuroscience (Remland, Jones, & Brinkman, 1995).

For the arts and education, theater neuroscience is important as it sheds light on the neural processes that occur during rehearsal and performance. Understanding the similarities and differences in the activity of neural circuits could elucidate differences between types of acting methods and techniques and will help create an interdisciplinary approach to theater. It could also help us evaluate the use of theater in education. In particular, researchers are interested in whether the techniques used in acting that engage the social brain may lead to actors having better mentalizing and empathy skills than nonactors (Goldstein & Bloom, 2011). In summary, there are many reasons to believe that the study of theater and acting has value to researchers working in cognitive and social neuroscience and has wider implications for the arts and education. However, there are also many challenges to research in this untapped domain. In the following sections, we outline how these challenges can be faced.

Capturing Data from Actors during Performance

The vast majority of research into the neural mechanisms of cognition uses fMRI brain imaging, where participants must lie down and remain still, in isolation, in a noisy and unnaturalistic scanner while brain images are captured. Such studies have provided a wealth of information on basic cognitive processes but are limited when the focus of our research is flexible social interactions with other people (Risko, Richardson, & Kingstone, 2016; Schilbach et al., 2013). New portable wearable technologies, including mobile EEG (Bevilacqua et al., 2018), fNIRS (Pinti et al., 2020), eye tracking (Bianchi, Kingstone, & Risko, 2020), and motion capture (Vlasic et al., 2007), are now allowing researchers to move out of the laboratory and understand neurocognitive processes in people freely moving in the real world. Here, we employ many of these wearable sensors in a hyperscanning configuration to monitor interpersonal synchrony in pairs of actors.

We used two wearable fNIRS systems to measure the actors' brain activation patterns in the pFC. fNIRS is a noninvasive optical neuroimaging technique that uses near-infrared light to measure changes in brain oxygenation and hemodynamics. It is based on neurovascular coupling and provides an indirect measure of brain activity through the quantification of changes in oxygenated (HbO2) and deoxygenated (HHb) hemoglobin (Pinti et al., 2020). fNIRS has distinct advantages and disadvantages compared with other neuroimaging techniques (see Table 1 in Pinti et al., 2020, for a summary), but what makes it particularly suitable for ecologically valid investigations of brain function within social environments is its superior robustness to movement with respect to other neuroimaging modalities, its portability, and the availability of wireless instrumentation (Pinti, Aichelburg, et al., 2015).

Given that "interacting brains exist within interacting bodies" (Hamilton, 2021), we monitored bodily coordination alongside fNIRS to aid the interpretation of hyperscanning brain data, measuring physiological changes in both actors using wearable chest straps and capturing


the actors’ behavior and movements using wearable
motion capture suits and video recordings. This is critical
to better understand how embodied social interactions
impact brain-to-brain synchrony.

Acting and the Sense of Self

A major challenge for naturalistic neuroscience research is integrating well-controlled experimental manipulations into free and unstructured natural behavior. Here, we use the classic "self-name effect" as a way to probe neural engagement when people are acting or not acting. A person's own name is a highly salient and attention-grabbing signal with increased processing priority (Gronau, Cohen, & Ben-Shakhar, 2003; Shapiro, Caldwell, & Sorensen, 1997); the most well-known example of this is the cocktail party phenomenon (Cherry, 1953). Studies investigating the neural correlates of hearing one's name consistently identify increased activity of the medial prefrontal cortex (mPFC; Holeckova et al., 2008; Carmody & Lewis, 2006). Notably, Kampe, Frith, and Frith (2003) used fMRI to investigate hearing one's own name compared with a random name. Participants were required to press a button when they heard a surname rather than a first name. Significant activation of the superior frontal gyrus (SFG) was found only when participants heard their own name compared with a random name. These results support the notion that calling someone's name is an important social cue that implies an intent to communicate. Similar effects have been reported in infants using fNIRS (Imafuku, Hakuno, Uchida-Ota, Yamamoto, & Minagawa, 2014), demonstrating the robustness of this phenomenon. Thus, it seems likely that brain responses to hearing one's own name could provide a marker of the sense of self. However, it is not clear if fNIRS in adults is sufficiently sensitive to pick up these effects, especially in a complex context with other activities going on.

There is reason to believe that acting might impact neural responses related to the sense of self, as seen in a recent innovative study by Brown et al. (2019). Their paper describes how, phenomenologically, actors take on a fictional first-person perspective on the role they inhabit, assuming the characteristics, thoughts, and actions of another person for the duration of the play. This follows a rigorous third-person analysis of the character via various acting techniques leading up to and through the rehearsal period. To test this idea, Brown et al. (2019) conducted an fMRI study of professional actors reading parts from Romeo and Juliet. They found reduced engagement of self-related brain systems (dorsomedial pFC, SFG, and ventral medial pFC) when actors responded to questions "in character." Thus, they suggested that acting involves a suppression of self-processing.

In the present study, we opted to test whether the "own name effect" can be seen in actors as they rehearse, actively in character, a piece of theater that would be performed live on stage, and whether the suppression of self (Brown et al., 2019) might also be visible. To give a concrete example, if an actor named "Nica" is performing a simple motor task with another actor, "Jacob," we would expect Nica's pFC to respond if she hears a voice calling her own name (Nica) but not her partner's name (Jacob). However, if the same name-calls are made while Nica is playing the role of Titania in a scene with Jacob in the role of Bottom, and if Nica has suppressed her sense of self to act as Titania, we might expect less engagement of pFC on hearing the name-call "Nica." Her response to hearing "Jacob" should remain similar to the control condition. However, it is also possible that the impact of the complex social-motor task of acting would drown out any impact of name-calls on pFC, or that our fNIRS device is not sensitive enough to measure the relevant signals during acting. Thus, the present study aims to determine, in a proof-of-principle approach, whether responses to name-calls can be seen in adult actors and whether they change when acting.
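As a rough sketch of how such a name-call contrast could be computed, the Python snippet below epochs a preprocessed fNIRS channel around name-call onsets and returns baseline-corrected mean responses. The sampling rate matches the Methods, but the function name, the baseline and window lengths, and the input arrays are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

FS = 13.33        # fNIRS sampling rate in Hz, as in the Methods
BASELINE_S = 2.0  # seconds of baseline before each name-call (assumed)
WINDOW_S = 12.0   # seconds of post-event window (assumed)

def name_call_response(signal, event_samples, fs=FS):
    """Baseline-corrected mean response after each name-call event.

    signal: 1-D preprocessed fNIRS activation signal for one channel.
    event_samples: sample indices of name-call onsets.
    """
    base_n = int(BASELINE_S * fs)
    win_n = int(WINDOW_S * fs)
    responses = []
    for ev in event_samples:
        if ev - base_n < 0 or ev + win_n > len(signal):
            continue  # skip events too close to the recording edges
        baseline = signal[ev - base_n:ev].mean()
        responses.append(signal[ev:ev + win_n].mean() - baseline)
    return np.array(responses)
```

Responses from the four design cells (self/other name × acting/control) could then be compared with standard statistics.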

Interpersonal Synchrony across Multiple Levels

As well as taking on a role as an individual, acting often requires a dynamic interaction between two or more people. Thus, theater neuroscience provides an opportunity to explore these interpersonal dynamics. There is increasing evidence that people engaged in social interactions coordinate their behavior, brains, and physiological signals across multiple levels. Brain synchrony across pairs of participants engaged in a coordinated task has been reported in many contexts (Cui, Bryant, & Reiss, 2012), including conversation (Jiang et al., 2012) and puzzle-solving (Fishburn et al., 2018). It seems likely that, during many joint actions, the neural activity in one brain can be coupled to the neural activity in the other person's brain both via verbal communication (e.g., speaker–listener coupling during a conversation [Hirsch, Noah, Zhang, Dravida, & Ono, 2018; Stephens, Silbert, & Hasson, 2010]) and via nonverbal communication (e.g., hand gestures, eye-to-eye contact, facial expressions [Noah et al., 2020]). Previous research has also shown that brain activity itself can be influenced by the social signals expressed by the other person (Hasson, Ghazanfar, Galantucci, Garrod, & Keysers, 2012). Therefore, by capturing brain activity from two people at once, it is possible to quantify and localize any coherent activity across the two brains, which may help us understand the neural mechanisms of social coordination (Hamilton, 2021). Brain-to-brain coupling (i.e., intersubject synchrony) is typically quantified in terms of the correlation or cross-brain coherence between the neural signals of the two brains (Astolfi et al., 2020; Cui et al., 2012) and can occur across the pFC (Noah et al., 2020; Jiang et al., 2012).

In separate studies, researchers have also quantified
physiological synchrony between pairs of participants


engaged in joint tasks. Previous studies suggest that interacting individuals might be more likely to become synchronized in heart rate (HR) and breathing rate (BR) as they coordinate their behaviors and share emotional states. For example, Konvalinka et al. (2011) found that similar HR dynamics occurred in a religious fire-walking ritual only between performers and relevant spectators, not irrelevant viewers. Concurrent patterns of HR changes were also found when participants built Lego models together (Fusaroli, Bjørndahl, Roepstorff, & Tylén, 2016). Similarly, Helm, Sbarra, and Ferrer (2012) demonstrated that shared BR dynamics occur when romantic couples complete a series of tasks and share emotional arousal. In a recent review, Palumbo, Marraccini, Weyandt, and Wilder-Smith (2016) summarize how physiological synchrony can act as an index of the state of an interpersonal relationship.

Coordination between people is also seen in behavior. Common forms include rocking the body together, dancing together, walking together, hitting a beat together, and swinging wrists together (Richardson & Dale, 2005). Launay, Tarr, and Dunbar (2016) proposed that interpersonal synchronization occurs when individuals establish a stable relationship with others and show simultaneity in their actions, that is, their thoughts, feelings, and behaviors become synchronized.

These studies of interpersonal coordination have each focused on different measures (brain/heart/behavior), and few studies that we are aware of have examined coordination across multiple modalities. Here, we take advantage of our multimodal platform and present analytical methods to examine interpersonal coordination across all these different levels as a proof of principle. We use the common and well-established wavelet coherence method to quantify interpersonal coordination (Cui et al., 2012; Issartel, Marin, & Cadopi, 2007), comparing the average coherence level of true pairs of participants with that of pseudopairs created by shuffling the same data into pairs that did not really exist (i.e., pseudodyads). Our sample size does not permit strong hypotheses, but we can use our multimodal data to present analytical methods to test whether interpersonal coordination is found in different measures (brain activity, HR, BR, and accelerometer) and whether this coordination is greater for real dyads than for pseudodyads. We can also explore coordination across different frequency bands, which may help us target specific effects in future studies.
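To illustrate the true-dyad-versus-pseudodyad logic, here is a minimal Python sketch. It uses Welch magnitude-squared coherence from SciPy as a simple stand-in for the wavelet coherence of Cui et al. (2012); the band limits, segment length, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import coherence

def mean_coherence(x, y, fs, band=(0.01, 0.1)):
    """Mean magnitude-squared coherence of two signals within a band.

    Welch coherence stands in here for wavelet coherence; the band
    limits and nperseg are illustrative choices.
    """
    f, cxy = coherence(x, y, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

def true_vs_pseudo(dyads, fs):
    """Average coherence for real dyads vs. pseudodyads formed by
    re-pairing actor A of each dyad with actor B of every other dyad.

    dyads: list of (signal_A, signal_B) tuples, one per real pair.
    """
    true = [mean_coherence(a, b, fs) for a, b in dyads]
    pseudo = [mean_coherence(dyads[i][0], dyads[j][1], fs)
              for i in range(len(dyads))
              for j in range(len(dyads)) if i != j]
    return np.mean(true), np.mean(pseudo)
```

Interpersonal coordination is then suggested when the true-pair average reliably exceeds the pseudodyad average.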

Summary of This Study

In summary, this study aims to take neuroscientific measurement into the world of theater. As an exploration of the usefulness of wearable neuroimaging in theater, we investigate the sense of self and interpersonal coordination in professional actors as they rehearse an extract from a play. We aim to push technological boundaries by combining multiple modalities of data capture, including fNIRS (looking closely at activation in prefrontal regions) and physiological and behavioral recordings. This proof of principle will open the way to future research in the domain of theater-neuroscience and other rich social interactions.

METHODS

Participants and Recording Sessions

Participants were recruited through a local theater company (Flute Theatre) based in London. This is a small theater group who perform both traditional Shakespeare plays and interactive adaptations of Shakespeare for individuals with autism and their families. During the period of the study, the six actors were also working together to rehearse a stage play, and all were highly familiar with each other and with the core acting tasks used in this study. Six actors (three male) were recruited, with a mean age of 26.5 years. Participants were healthy with no known psychiatric or neurological impairments. They were compensated for their participation. This study was approved by the University College London (UCL) Research Ethics Committee, and written informed consent was obtained from all participants. Data were collected over 4 days, with 19 sessions of data captured in total. Two actors took part in each session (always in the same pairs).

Studies of the neural mechanisms engaged when actors perform theater have never been conducted before, and there is no prior work on which to base a formal power analysis. Our sample size of six participants (all the actors available) may seem small, but it was essential to work with actors who were familiar with this approach to Shakespeare and with the specific pieces used in this study. Each participant took part in multiple data collection sessions, giving a total of 19 data sets for analysis. Thus, for statistical purposes, our sample size is n = 19, which is comparable to previous infant studies of the sense of self in mPFC (Imafuku et al., 2014), fMRI studies of acting (Brown et al., 2019), and studies of aesthetic perception in the performing arts (Calvo-Merino et al., 2008).

Experimental Tasks

During each recording session, actors performed three different tasks in short blocks presented in a pseudorandom order. These were: Walking and Speaking (control tasks) and Acting. In the Walking task, the two participants were instructed to move around the space at a walking pace without interacting with each other. Each block of Walking lasted 45 sec. In cognitive terms, this task requires whole-body movement and a minimal degree of coordination to avoid bumping into the other person.

In the Speaking task, the two actors stood side-by-side facing the "audience" (at least 10 attentive people, including other actors and the research team) and read lines from Shakespeare. The actors were given the text on a


printed page and read alternate lines until the end of the text was reached or the time limit of 45 sec was up. The texts chosen were prologues from four Shakespeare plays (Romeo and Juliet, Henry V, Henry VIII, and Pericles). These were chosen because they have a strong rhythm in Shakespearean language but are not delivered by a particular character. In cognitive terms, this task requires speech and turn-taking but does not involve remembering lines, coordinating actions, or taking on a strong role.

In the Acting task, two different Shakespeare scenes were used, each requiring full engagement from the two actors. Both scenes were selected from Flute Theatre's adaptation of A Midsummer Night's Dream for children with autism, in which short excerpts of the play can be performed repeatedly to allow the children to engage in socially interactive behaviors. This repetition of short elements made the scenes particularly suitable as a cognitive task, and all the actors involved were very familiar with the scenes, having performed the play before. Thus, they can interpret these short scenes in the context of their deep understanding of the full play.

In the "Titania" scene, one actor takes the role of the fairy queen Titania and the other the role of Bottom the donkey, indicated by holding hands to the sides of the head as "ears." Titania moves about the space until she can make eye contact with Bottom and then says "Doy-yo-yo-yoing; I love thee!" while making an exaggerated hand gesture. In response, Bottom becomes alarmed and turns his back on Titania. Titania then moves around the space to capture Bottom's gaze again, and the scene can repeat as many times as needed. In the "Cobweb" scene, one actor takes the role of Cobweb the spider, whereas the other takes the role of Bee. The two characters start on opposite sides of the stage, with the Bee moving about. When Cobweb throws out her hands to catch the Bee, the Bee freezes and then moves slowly toward Cobweb, in time with Cobweb's hand movements. Cobweb hugs the Bee, and the Bee screams and tilts his head as if dead. Cobweb then moves away to do a victory dance, carefully watching the Bee, but the Bee revives and moves back to the other side of the stage. The scene can repeat as many times as needed. For each of these two scenes, the actors were instructed to do "Cobweb" or "Titania" but could then choose who took which role. They would perform the sequence 2 or 3 times before swapping roles and performing again until the time limit (120 sec) ran out. The actors found it easy to negotiate the roles and swap as needed without words or delays.

These short scenes were selected because they include strong social interaction (eye contact, hugs) and close interpersonal coordination. Each action sequence depends on the partner's actions to maintain the correct timing as the actors create the characters. In cognitive terms, the acting task requires visuomotor coordination, social interaction, and careful executive control to make the interaction work, as well as the adoption of the particular role.

Name-call Events

In addition to the three basic tasks described above, we imposed "name-call" events on the actors to test how their pFC responds to hearing their own name (or their partner's name). Name-call stimuli were prerecorded on the experimental computer in the voice of one member of the experimental team, who also instructed the actors to start/end each task. At pseudorandom points during each trial, the actors would hear one of their names called out. Each name-call event acts as a "self-name" trial for one actor and as an "other-name" trial for their partner, allowing us to collect more events in the time available. Actors were told that they would hear names called out but should ignore them and continue with their tasks.

There were 17 "Name" trials in each experimental session, occurring 12 times in the control trials and 5 times in the acting trials. Overall, the "name-call" events fall into a 2 × 2 factorial design with factors: Name (self, other) × Task (control, acting). Examples of trial timings are given in Figure 1. This allows us to test whether participants' response to their own name changes when they are acting a role.
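Because each name-call doubles as a "self" trial for one actor and an "other" trial for the partner, the condition labels can be derived mechanically from the session log. The sketch below illustrates this; the actor names and the per-name split of the 12 control and 5 acting events are hypothetical, not the actual session schedule.

```python
from collections import Counter

def label_events(events, actor):
    """Assign 2 x 2 condition labels to name-call events for one actor.

    events: list of (called_name, task) tuples, e.g. ("Nica", "acting");
    actor: this actor's name. Names and the per-name split below are
    illustrative, not the real session log.
    """
    return [("self" if name == actor else "other", task)
            for name, task in events]

# One hypothetical session: 12 control and 5 acting name-calls in total;
# every event is "self" for one actor and "other" for the partner.
events = ([("Nica", "control")] * 6 + [("Jacob", "control")] * 6
          + [("Nica", "acting")] * 3 + [("Jacob", "acting")] * 2)
counts = Counter(label_events(events, "Nica"))
```

Running the same labeling with `actor="Jacob"` swaps the self/other counts, which is how one recording yields trials for both actors.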

Data Acquisition

Data collection took place in a theater space (Bloomsbury Theatre studio, UCL). A sketch of the experimental area is shown in Figure 2A.

To capture the events in as much detail as possible and to track both participants' brain, behavior, and physiology, we used a multimodal wearable and wireless platform on both actors at the same time (Figure 2B). This included the following equipment: (1) Neuroimaging: a 22-channel wearable fNIRS system (LIGHTNIRS) sampled brain hemodynamic/oxygenation changes over pFC at 13.33 Hz. Optode arrangement and channel configuration are shown in Figure 3A.

To place the fNIRS cap in a reliable way across participants, we used the 10–20 electrode placement system to locate Channel 19 in correspondence with the Fpz point (10% of the nasion–inion distance). The Montreal Neurological Institute (MNI) and anatomical locations of each channel are listed in Table 1; (2) Behavior: Actors wore full-body mocap suits (Perception Neuron) that capture the location of the head and limbs with 18 inertial measurement unit (IMU)/magnetic markers at 120 Hz. Actors' movements were also recorded by means of an accelerometer placed in a wearable belt (EQ02 LifeMonitor, Equivital). Audio and video recordings of the whole room were captured, along with a video camera (Sony Handycam HDR-CX405) for behavioral coding of participants' performances; (3) Physiology: Changes in heart and respiration rates were monitored using a wearable belt (EQ02 LifeMonitor, Equivital) worn around the chest; the


Figure 1. Task design. During each recording session, actors performed three tasks (walking and speaking [control], acting) in a pseudorandom order. In the acting task, the actors played two scenes adapted from Shakespeare's A Midsummer Night's Dream (Cobweb and Titania). Name-call events were added at random points during each trial. The actors would hear their own name or their partner's name called out.

electrocardiogram was recorded at 256 Hz and respiration at 25.6 Hz and then used to compute the heart and respiration rates (in bpm) every second (1 Hz).
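As an illustration of this step, the following minimal Python sketch derives beat-to-beat HR from an ECG trace via R-peak detection (the Equivital belt computes HR internally; the peak-detection thresholds here are illustrative assumptions that would need tuning on real ECG):

```python
import numpy as np
from scipy.signal import find_peaks

ECG_FS = 256  # Hz, as recorded by the chest belt

def heart_rate_bpm(ecg, fs=ECG_FS):
    """Instantaneous heart rate from R-R intervals.

    height/distance thresholds are illustrative; distance enforces a
    refractory period of 0.4 sec between detected R peaks.
    """
    peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
    rr_sec = np.diff(peaks) / fs   # R-R intervals in seconds
    return 60.0 / rr_sec           # beats per minute
```

The resulting beat-to-beat series would then be resampled to the 1-Hz grid used for the synchrony analyses.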

The timing of each data collection session was con-
trolled by a program written in Cogent, which ran on a
laptop. This program determined the order of the tasks
performed by the actors and displayed a written

instruction for the next task block on the screen. A mem-
ber of the research team would tell the actors what to do,
check they were ready, and then recorded the start time
of that block. The Cogent program would play the name-
call stimuli through loudspeakers during the task block as
required and then display an END message when the task
time limit was over. Again, the research team member

Figure 2. Experimental setup. (A) Two actors performed in the recording space, watched by an audience including the other actors and the research team. A laptop and a speaker coordinated by one research team member provided instructions and timings. (B) Each actor wore an fNIRS system, a full-body motion capture suit, and a physiological monitoring belt around the chest underneath his/her clothes capturing HR and BR.


Figure 3. fNIRS optode placement and channel configuration (A). Light sources and detectors are indicated by red and blue circles, respectively. The 22 measurement channels covering the pFC are shown as green circles. fNIRS ROIs (B). The right inferior frontal gyrus (R-IFG) consists of Channels 1, 8, and 16. The right dorsolateral prefrontal cortex (R-DLPFC) consists of Channels 2, 9, and 17. The right lateral frontopolar cortex (R-FPC) consists of Channels 3, 10, and 18. mPFC consists of Channels 4, 11, 12, and 19. The left lateral frontopolar cortex (L-FPC) consists of Channels 5, 13, and 20. L-DLPFC consists of Channels 6, 14, and 21. The left inferior frontal gyrus (L-IFG) consists of Channels 7, 15, and 22. Whole pFC consists of the average of all the ROIs.

would tell the actors to stop and prepare for the next
block. The person who made the name-call voice record-
ings was the same person who gave the actors their task
instructions during the study, so they were attuned to
that person’s voice throughout.
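For later analyses, the channel-to-ROI grouping listed in the Figure 3 caption can be written as a simple lookup table. This Python sketch averages channels into ROI time courses; the array shape and function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Channel-to-ROI grouping from the Figure 3 caption (1-indexed channels)
ROIS = {
    "R-IFG":   [1, 8, 16],
    "R-DLPFC": [2, 9, 17],
    "R-FPC":   [3, 10, 18],
    "mPFC":    [4, 11, 12, 19],
    "L-FPC":   [5, 13, 20],
    "L-DLPFC": [6, 14, 21],
    "L-IFG":   [7, 15, 22],
}

def roi_signals(channel_data):
    """Average channels into ROI time courses.

    channel_data: assumed array of shape (22, n_samples) holding the
    preprocessed activation signal for each fNIRS channel.
    """
    out = {roi: channel_data[np.array(chs) - 1].mean(axis=0)
           for roi, chs in ROIS.items()}
    # Whole pFC = average of the seven ROI time courses, as in the caption
    out["pFC"] = np.mean(list(out.values()), axis=0)
    return out
```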

DATA ANALYSIS

Preprocessing

The fNIRS data were preprocessed using the Homer2 software package (Huppert, Diamond, Franceschini, & Boas, 2009) following the preprocessing pipeline described in Pinti, Scholkmann, Hamilton, Burgess, and Tachtsidis (2019). We first visually inspected the raw intensity signals to identify channels with a low signal-to-noise ratio because of detector saturation, poor optical coupling (e.g., low photon counts, no HR component in the raw data), or substantial movement artifacts (e.g., because of head movements or the actors' exaggerated facial expressions, which particularly affected the channels over medial pFC) to exclude from further analyses. The channels excluded from the group analyses (n data sets < 6) are reported in italics in Table 1. Raw data were then converted into changes in optical density (Homer2 function, hmrIntensity2OD). Motion artifacts were identified and corrected using the wavelet-based method (Homer2 function, hmrMotionCorrectWavelet; iqr = 1.5; Molavi & Dumont, 2012), and a band-pass filter ([0.005 0.4] Hz; Homer2 function, hmrBandpassFilt) was applied to remove high-frequency noise, such as the heartbeat, and slow drifts. Preprocessed optical density signals were converted into concentration changes of ΔHbO2 and ΔHbR using the modified Beer–Lambert law (Homer2 function, hmrOD2Conc; fixed DPF = 6). To improve the reliability of our results, we combined the preprocessed ΔHbO2 and ΔHbR into the activation signal by means of the correlation-based signal improvement method (Cui et al., 2012). This allows us to draw conclusions on a signal that includes information about both oxy- and deoxyhemoglobin, with the aim of minimizing false positives in our statistical analyses (Tachtsidis & Scholkmann, 2016).

Preprocessed fNIRS activation signals were then used to run two separate analyses:

(1) Contrast effects analysis (Contrast Effects Analysis): to localize the brain regions associated with the processing of hearing one's own or another's name during different cognitive load conditions (acting, control/not acting);

(2) Brain-to-brain coherence (Brain-to-brain Coherence): to quantify and localize where across-brain synchrony occurs between the two actors during different cognitive load conditions (acting, control/not acting).

HR, BR, and acceleration signals were used to compute interpersonal synchrony at the physiological and behavioral levels, respectively (Physiological and Behavioral Interpersonal Synchrony). This was done to investigate whether synchronization of heart and breathing rates and of movements occurs between the two actors during naturalistic social interactions under different cognitive loads (acting, control/not acting).
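The correlation-based signal improvement step can be illustrated with a minimal NumPy sketch (our own simplified rendering of the Cui et al., 2012 method, not the Homer2 implementation; all signal names here are synthetic toy data):

```python
import numpy as np

def cbsi_activation(hbo, hbr):
    """Correlation-based signal improvement (after Cui et al., 2012):
    combine oxy- (hbo) and deoxyhemoglobin (hbr) time series into one
    activation signal, assuming the two are anticorrelated and share
    an additive noise component (e.g., motion artifact)."""
    hbo = hbo - hbo.mean()
    hbr = hbr - hbr.mean()
    beta = hbo.std() / hbr.std()      # amplitude ratio between chromophores
    return 0.5 * (hbo - beta * hbr)   # noise common to both largely cancels

# Toy demonstration with synthetic signals (not real study data)
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)
hemo = np.sin(2 * np.pi * 0.05 * t)        # "true" hemodynamic component
artifact = 0.5 * rng.standard_normal(600)  # motion noise shared by both signals
hbo = hemo + artifact
hbr = -hemo + artifact
activation = cbsi_activation(hbo, hbr)     # tracks `hemo` closely
```

Because the artifact enters both chromophores with the same sign while the hemodynamics enter with opposite signs, the weighted difference suppresses the shared noise, which is the rationale for running the statistics on this combined signal.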
Contrast Effects Analysis

First-level statistical analysis was carried out using a channel-wise general linear model (GLM; Friston et al., 1994) to fit the fNIRS activation signals, down-sampled to 1 Hz, using the SPM for fNIRS toolbox (Tak, Uga, Flandin, Dan, & Penny, 2016). For each participant, the design matrix included nine regressors modeling the experimental conditions, including: (1) the walking block; (2) the speaking block; (3) the acting Cobweb block; (4) the acting Titania block; and the name-call events in each condition (control-self, control-other, acting-self, acting-other).

Table 1. MNI Coordinates of the fNIRS Channels. For each of the 22 channels, the table reports the MNI coordinates (x, y, z), the most probable anatomical location (Brodmann area), the anatomy probability, and the number of good data sets; channels with good data from fewer than six participants were excluded from group analyses.

The following contrasts were computed:

Contrast 1 – Main effect of name calling: [Acting-Self + Control-Self] > [Acting-Other + Control-Other];

Contrast 2 – Simple effect of name calling while acting: (Acting-Self) > (Acting-Other);

Contrast 3 – Simple effect of name calling while not acting: (Control-Self) > (Control-Other);

Contrast 4 – Acting-by-name-calling interaction: [Acting-Self > Acting-Other] > [Control-Self > Control-Other];

Contrast 5 – Main effect of acting: [Acting-Self + Acting-Other] > [Control-Self + Control-Other].

At the group level, only channels with good data from at least six participants were considered for the channel-wise t tests (see Table 1). As this analysis was mainly
exploratory, there was no correction for multiple compar-
isons and t tests were used instead of mixed-effects
型号; the latter is an appropriate statistical approach
where individual participants contribute multiple times
to the sample. However, our data set might not be amenable to this approach because of the small sample size.
We include the results of the corresponding mixed-
models analysis in the Appendix (Table A1) as a recom-
mendation for future works with larger samples of
participants.
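The channel-wise GLM and contrast test can be sketched as follows (a minimal ordinary-least-squares stand-in for the SPM for fNIRS pipeline, with no HRF convolution or serial-correlation modeling; the toy regressors and effect sizes are ours):

```python
import numpy as np

def glm_contrast_t(y, X, c):
    """Fit y = X @ beta + noise by OLS and return the t statistic for the
    contrast vector c, i.e. t = (c'beta) / sqrt(sigma^2 * c'(X'X)^+ c)."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof                     # residual variance
    var_con = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(var_con)

# Toy design: constant plus "self" and "other" name-call event regressors
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n),
                     (rng.random(n) < 0.1).astype(float),   # self-name events
                     (rng.random(n) < 0.1).astype(float)])  # other-name events
# Simulated channel signal: strong self response, weak other response
y = X @ np.array([0.0, 1.0, 0.2]) + 0.5 * rng.standard_normal(n)
t_self_gt_other = glm_contrast_t(y, X, np.array([0.0, 1.0, -1.0]))
```

A contrast such as `[0, 1, -1]` corresponds to the simple effect "self-name > other-name" described above; the toolbox performs the equivalent computation per channel and participant before the group-level t tests.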

Brain-to-brain Coherence

First, fNIRS data channels were grouped into ROIs based
on probabilistic anatomical locations listed in Table 1.
Seven ROIs were formed in accordance with the most
probable anatomical region of each channel, thus resulting
in six ROIs consisting of three data channels and one ROI
consisting of four data channels (Figure 3B). The seven
ROIs are as follows: right inferior frontal gyrus (R-IFG),
right dorsolateral prefrontal cortex, right lateral frontopo-
lar cortex (R-FPC), medial frontopolar cortex, left lateral
frontopolar cortex, left dorsolateral prefrontal cortex
(L-DLPFC), and left inferior frontal gyrus.

The preprocessed channel fNIRS activation signals were
averaged within each ROI to obtain the seven ROI-level
fNIRS time series. Both participants within a dyad had
to have at least one valid channel within an ROI for that
ROI to be used in the analysis for that dyad. ROI-level
fNIRS time series were then also averaged across ROIs

to create an eighth fNIRS signal corresponding to the
whole pFC ROI.
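The ROI-averaging step, with the validity rule described above, can be sketched as (channel groupings follow Figure 3B; the function and dictionary names are ours):

```python
import numpy as np

# Channel-to-ROI mapping from Figure 3B (channel numbers are 1-based as in the text)
ROI_CHANNELS = {
    "R-IFG": [1, 8, 16], "R-DLPFC": [2, 9, 17], "R-FPC": [3, 10, 18],
    "mPFC": [4, 11, 12, 19], "L-FPC": [5, 13, 20],
    "L-DLPFC": [6, 14, 21], "L-IFG": [7, 15, 22],
}

def roi_timeseries(data, valid):
    """Average a (22, n_samples) channel array into ROI-level time series,
    using only channels flagged as valid; an ROI with no valid channel is
    returned as NaN so that it can be dropped for that dyad."""
    out = {}
    for roi, chans in ROI_CHANNELS.items():
        idx = [c - 1 for c in chans if valid[c - 1]]
        out[roi] = data[idx].mean(axis=0) if idx else np.full(data.shape[1], np.nan)
    # Whole-pFC signal: the average across the (valid) ROI time series
    out["whole-pFC"] = np.nanmean(np.vstack(list(out.values())), axis=0)
    return out
```

Returning NaN for an empty ROI mirrors the rule that both actors in a dyad must contribute at least one valid channel for an ROI to enter the coherence analysis.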

Brain-to-brain coherence was computed between the
fNIRS signal in each of the eight ROIs for each dyad using
wavelet transform coherence ( WTC) implemented in the
wavelet coherence toolbox of Grinsted, Moore, and
Jevrejeva (2004). More precisely, the continuous wavelet
transform is first calculated for each single fNIRS time
系列, where the real part of the transform represents
the amplitude and the imaginary part provides the phase.
The cross-wavelet coherence is then calculated from the
two wavelet transforms as the square of the smoothed
cross-spectrum normalized by the individual smoothed
power spectra (Torrence & Compo, 1998). Thus, WTC
decomposes each actor’s fNIRS time series into frequency
components and highlights the local correlation between
the two fNIRS time series of the dyad in the time–
frequency space (Cui et al., 2012). Interpersonal neural
synchrony as expressed by WTC is thus represented in
the time–frequency space, that is, at different frequency components (y axis) and for the whole duration of the
experiment (x axis). Here, brain-to-brain coherence was
computed between pairs of ROIs of the two actors within
a dyad (e.g., Dyad 1: Actor 1 R-IFG with Actor 2 R-IFG). A
flow chart of the procedure is shown in Figure 4.
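The actual analysis used the wavelet transform coherence implementation of the Grinsted et al. MATLAB toolbox. As a rough, runnable analogue, Welch magnitude-squared coherence (SciPy) yields one coherence value per frequency, comparable to averaging the WTC spectrogram across time as described above; the signals and parameters below are purely illustrative:

```python
import numpy as np
from scipy.signal import coherence

fs = 1.0                                   # fNIRS signals down-sampled to 1 Hz
rng = np.random.default_rng(2)
t = np.arange(1200)                        # 20 min of samples at 1 Hz
shared = np.sin(2 * np.pi * 0.02 * t)      # common slow rhythm (~50-sec period)
actor1 = shared + rng.standard_normal(t.size)  # illustrative ROI signal, actor 1
actor2 = shared + rng.standard_normal(t.size)  # illustrative ROI signal, actor 2

# One coherence value in [0, 1] per frequency bin
f, cxy = coherence(actor1, actor2, fs=fs, nperseg=256)
task_band = (f >= 0.007) & (f <= 0.2)      # the paper's task-frequency range
mean_task_coherence = cxy[task_band].mean()
```

Unlike this Welch stand-in, WTC retains the time axis, which is what allows the paper to localize coherence both in frequency and over the course of the rehearsal before averaging across time.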

To estimate the significance of the interpersonal coher-
ence values, we generated pseudodyads and computed
the WTC. Pseudodyads were constructed by taking a real
dyad, splitting one of the actor’s fNIRS time series into
equal halves, and then reconstructing the fNIRS signal by
switching the halves. This means that in a pseudopair,
Actor 1’s fNIRS signal from time (0–20 min) might be com-
pared with Actor 2’s fNIRS signal from (10–20) mins and
(0–10) mins concatenated in that order. This preserves
actor identity and all sample characteristics except the live
interaction itself. Pseudopairs were created within dyad to
ensure that the pseudopair WTC is based on the same
optode locations and channels as the real pair WTC. 这
allowed us to avoid the issue of unevenly distributed miss-
ing data as different channels were excluded for each
actor. For both the real and pseudodyads and for each
ROI and whole pFC, we averaged the WTC values across
time (Figure 4); two-sample t tests were then used to
compare the average WTC values of each real dyad versus
the corresponding pseudodyad at each frequency. We refer to task frequencies for those frequency components belonging to the range of our task timing, that is, ∼0.007–0.2 Hz; we refer to frequencies above ( f > 0.2 Hz) or below ( f < 0.007 Hz) this range as high frequencies and low frequencies, respectively.

Physiological and Behavioral Interpersonal Synchrony

Figure 4. Brain-to-brain coherence analysis flow chart. ROI-level fNIRS signals from both actors within the same dyad are used to compute the WTC of the real dyads. Pseudodyads are then created by taking a real dyad, splitting one of the actor's fNIRS time series into equal halves, and then reconstructing the fNIRS signal by switching the halves. WTC is then calculated from the resulting pseudodyad's fNIRS signals. To compare the real dyads' brain-to-brain coherence with the pseudodyads' brain-to-brain coherence, the WTC data are averaged across time (x axis in the WTC spectrogram), thus obtaining WTC values at each frequency for both the real and pseudodyads. Two-sample t tests can then be used to compare the resulting WTC values of real dyads versus pseudodyads at each frequency. The process is repeated for each ROI.

Similarly to the procedure described in the Brain-to-brain Coherence section and Figure 4, WTC was computed between the HR (WTCHR), BR (WTCBR), and acceleration (WTCACC) signals of the two actors. This was done for each dyad across the whole experimental session.
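The half-swap surrogate used for the brain, HR, BR, and acceleration signals alike can be sketched as (a minimal illustration; the function name is ours):

```python
import numpy as np

def pseudodyad_swap(signal):
    """Create a pseudodyad surrogate: split one actor's time series into
    two equal halves and swap them, preserving that actor's own signal
    statistics while destroying the moment-to-moment alignment with the
    partner's signal."""
    half = len(signal) // 2
    return np.concatenate([signal[half:], signal[:half]])

x = np.arange(10.0)          # stand-in for one actor's HR/BR/acceleration/fNIRS trace
surrogate = pseudodyad_swap(x)   # second half first, then the first half
```

Coherence between the untouched partner signal and this surrogate then serves as the chance-level baseline against which the real-dyad coherence is tested.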
To estimate the significance of the physiological and behavioral interpersonal coherence values, we generated pseudodyads following the same methodology as for the brain-to-brain coherence (i.e., taking a real dyad, splitting one of the actor's HR, BR, or acceleration time series into equal halves, and then reconstructing the signal by switching the halves) and computed the WTC on the pseudodyads. For the WTCHR, WTCBR, and WTCACC of both the real and pseudodyads, we averaged the WTC values across time; two-sample t tests were then used to compare the average WTCHR, WTCBR, and WTCACC values of each real dyad versus the corresponding pseudodyad at each frequency. We refer to task frequencies for those frequency components belonging to the range of our task timing, that is, ∼0.007–0.2 Hz; we refer to frequencies above ( f > 0.2 Hz) or below ( f < 0.007 Hz) this range as high frequencies and low frequencies, respectively.

RESULTS

Contrast Effect Analysis

In this section, we present the results of the group-level GLM analysis carried out on the fNIRS activation signals of each actor individually. In particular, our design matrix included nine regressors (see Preprocessing section), which were used to fit the fNIRS data and estimate the β values for each actor. The β values were used to test how hearing one's own name or another name modulates activity in pFC while acting or not acting. t Test activation maps resulting from the contrasts listed in the Preprocessing section are shown in Figure 5. Group-averaged β values of the significant channels ( p < .05) for the compared conditions are reported as well. The corresponding results using multilevel modeling for the group analysis are reported in Table A1 in the Appendix.

We did not find a significant main effect of name calling (Figure 5A) nor a simple effect of name calling while acting (Figure 5B) when actors heard their own name compared with their partner's name.
Whilst Channel 7 was close to significance for the main effect of name calling contrast in the GLM analysis (Figure 5A), the mixed-model analysis revealed a significant effect (Table A1 in the Appendix). However, when participants were not acting, their mPFC did respond to hearing their own names. Specifically, hearing self-name compared with other-name led to an increased signal in mPFC (Channel 20, Figure 5C; see Table 1) while not acting. In addition, significant activity in L-DLPFC showed an interaction effect between the task and name factors (Figure 5D). This result was confirmed by the mixed-model analysis (Table A1 in the Appendix), which also unveiled a new significant interaction in Channel 8. In this region, there was a strong response to self-name during control tasks but a suppression to self-name when acting, with less sensitivity to other-name. Finally, we did not find a significant main effect of acting (Figure 5E).

Figure 5. t Value maps for the five contrasts computed on the actors' fNIRS activation signals. Gray channels indicate the excluded channels (i.e., good data from fewer than six participants). Significant channels ( p < .05) at the uncorrected level are indicated by white squares. Group-averaged β values of the significant channels across the conditions of interest are included in the bar plots.

Brain-to-brain Coherence

The real versus pseudodyad comparison revealed statistically significant ( p < .05; FDR corrected for multiple comparisons) differences in terms of brain-to-brain synchrony as indicated by WTC (Figure 6).
Results are shown in Figure 6, where the black dots represent the frequencies at which the group-average WTC of the real dyads (red line) and of the pseudodyads (blue line) differ significantly for each ROI. In particular, we have focused on the frequency range up to ∼0.1 Hz, which includes the frequency components plausibly associated with hemodynamic changes. The mPFC and left pFC ROIs were excluded from this plot as fewer than six dyads provided good data.

The brain-to-brain coherence of the real dyads (Figure 6, red line) was significantly higher than the brain-to-brain coherence of the pseudodyads (Figure 6, blue line) in all the ROIs, meaning that significant across-brain synchrony occurred between the actors. This can be mostly observed in the task frequency range (∼0.007–0.2 Hz, i.e., the gray shaded areas in Figure 6) in the right pFC ROIs, as marked by the black arrows. There are also effects at low frequencies ( f < 0.007 Hz), possibly related to physiological processes.

Figure 6. Group results for real dyads versus pseudodyads brain-to-brain coherence for each valid ROI. The red and blue lines represent the group-averaged WTC for the real dyads and pseudodyads, respectively, at all frequencies. The frequencies at which there is a statistically significant difference ( p < .05) in the brain-to-brain WTC between the real dyads and the pseudodyads are marked with black dots. Gray shaded areas highlight the task-frequency range (i.e., 45–120 sec / ∼0.007–0.2 Hz). Arrows A and B indicate significant brain WTC in the task frequency band (0.020- to 0.021-Hz range) where there is a lack of matched WTC in the heartbeat and breathing data.

Physiological and Behavioral Interpersonal Synchrony

To assess whether there was a statistically significant coherence between the actors at the physiological and behavioral levels, two-sample t tests were used to compare the average WTCHR, WTCBR, and WTCACC values of each real dyad versus the corresponding pseudodyad for each frequency component. Results are presented in Figure 7. Here too, we have focused on the frequency range up to ∼0.1 Hz, which includes frequency components that could be associated with hemodynamic changes in the fNIRS data. We found that the real dyads had significantly higher ( p < .05) WTCHR (Figure 7A, black dots), WTCBR (Figure 7B, black dots), and WTCACC (Figure 7C, black dots) with respect to the pseudodyads at the different frequency bands.

Figure 7. Group results for real dyads versus pseudodyads physiological coherence for HR (A) and BR (B) WTC, and behavioral coherence for acceleration (C) WTC. The red and blue lines represent the group-averaged WTC for the real dyads and pseudodyads, respectively, at all frequencies. The frequencies at which there is a statistically significant difference ( p < .05) in WTCHR, WTCBR, and WTCACC between the real dyads and the pseudodyads are marked with black dots. Gray shaded areas highlight the task-frequency range (i.e., 45–120 sec / ∼0.007–0.2 Hz).

DISCUSSION

Almost nothing is known about the neural mechanisms that occur when people take on a different role, as actors do in the theater, and how they coordinate their movements with others as part of a performance.
Here, we used fNIRS in conjunction with physiological recordings and motion capture to examine brain activity patterns in pairs of actors rehearsing Shakespeare. We found that fNIRS was able to record pFC activity in actors while they rehearsed an extract of a play for live stage performance, with differences found in pFC regions between experimental and control conditions. We also showed that multimodal data recorded in naturalistic settings can be used to quantify interpersonal coordination of brain, behavior, and physiology across pairs of actors. We discuss each of these results in turn and then consider the technological limitations on this kind of research and how they may be overcome in future to enable new research in theater neuroscience.

Methodological Advances

The primary aim of this article was to determine whether it is possible to record meaningful signals from the brains of actors as they rehearse a piece, using wearable fNIRS equipment. In this, we faced several challenges, including (a) designing a cognitive task that was feasible and meaningful, (b) capturing fNIRS signals during complex movements, (c) capturing body movements and physiological signals to allow appropriate interpretation of the fNIRS, and (d) finding appropriate methods to analyze this complex data set. We believe that our study succeeds in showing that these challenges can be tackled and that there are ways to capture some of the real-world richness of theater and acting with brain imaging devices. This builds on previous work recording from musicians (Omigie et al., 2015) and dancers (Calvo-Merino et al., 2008), extending performance neuroscience to the domain of theater.

A major challenge in real-world neuroscience lies in designing experiments that allow participants to engage in complex naturalistic tasks but still have enough experimental control for a robust analysis.
Here, we opted to impose external events on an ongoing sequence of complex actions to balance experimental control with real-world behavior. That is, we could allow actors to perform their normal rehearsal while we imposed "name call" events, and we could then analyze the data to identify brain responses to this socially meaningful stimulus.

To capture fNIRS signals and physiological signals during complex actions, we used two wearable Shimadzu LightNIRS devices in conjunction with physiological monitoring belts, wearable motion capture, and video recording. We developed methods to synchronize signals across the equipment and made extensive use of motion correction in our fNIRS data analysis. There was still some data loss, but we were able to capture enough data to complete a meaningful analysis. We demonstrate the application of two different analysis methods for this data set. First, an event-related GLM approach, similar to traditional models of fMRI data (Penny, Friston, Ashburner, Kiebel, & Nichols, 2006), was used to test whether the pFC responds to name calls. Second, a wavelet coherence approach was used to track patterns of similarity across brains, physiology, and behavior. Whereas this method has been used on separate data sets before (e.g., Hirsch et al., 2021; Quer, Daftari, & Rao, 2016), few studies have applied wavelet coherence across multiple data modalities in the same participants. Our results show that it is feasible to do this, and we discuss the effects we find in different frequency bands in more detail below. Overall, we believe that our work provides a proof of principle for the application of fNIRS to the domain of theater and for the development of new experimental designs and analysis approaches for these data.

The Acting Self

Our cognitive intervention in this study was a name-call task in which participants heard their own name or a control name while acting or not acting.
Previous data suggest that acting might lead to a suppression of the sense of self (Brown et al., 2019). We also know that the pFC is engaged by self-related concepts (Northoff et al., 2006) and responds to hearing one's own name (Imafuku et al., 2014; Kampe et al., 2003). Our results are in line with these earlier studies. We find that, during control tasks, actors were more responsive to their own name than to their partner's name, and this effect was found in Channel 20, which was one of our closest good channels to mPFC. Unfortunately, data quality in most medial channels was poor, and these could not be analyzed. This may have been caused by exaggerated facial expressions, with consequent eyebrow movements, made by the actors while acting. Frowning and eyebrow raising are in fact a source of motion artifacts in fNIRS signals (Noah et al., 2020; Yücel, Selb, Boas, Cash, & Cooper, 2014) and may be hard to avoid in naturalistic social interactions.

The positive response to self-name calling in Channel 20 is consistent with prior literature showing self-name prioritization (Gronau et al., 2003; Shapiro et al., 1997) and suggests that fNIRS is able to detect a self-name response in typical adults. We go beyond these tasks and show that self-name responses can be seen even when participants are engaged in other cognitive tasks such as walking around a space and speaking lines from Shakespeare. Both of these are complex coordination tasks but did not require participants to be in their character roles specific to the play that was being rehearsed. We further found that, when actors were engaged in the acting task, this effect was absent. In particular, Channel 6 in L-DLPFC showed an interaction between task and name calling, such that there was a response to one's own name during control tasks, but this was suppressed during acting.
This pattern is consistent with the hypothesis that acting involves a suppression of the sense of self, as Brown et al. (2019) showed that, when participants in MRI were required to respond as if they were Juliet (or Romeo), they showed reduced activity of the mPFC and SFG (Kampe et al., 2003). Both the medial and lateral pFC have previously been associated with mentalizing, including imagination (Dupre, Luh, & Spreng, 2016; Trapp et al., 2014) and pretense (Whitehead, Marchant, Craik, & Frith, 2009; German, Niehaus, Roarty, Giesbrecht, & Miller, 2004). Our findings are consistent with these pFC activity findings and show that wearable brain imaging technology can be used effectively, in place of laboratory-based technology such as fMRI, to investigate the sense of self in this creative context.

We do not know the technique the actors used to become their Midsummer Night's Dream characters, but their technique would likely involve an in-depth knowledge of the character in relation to other characters (Stanislavsky & Hapgood, 1949). Actors must use their imagination, paired with this knowledge, to build a profile for the character that spans before and after the events of the play (Kogan & Kogan, 2009). In this particular piece, the actors were performing short excerpts of Shakespeare that have been designed to be engaging for children with autism (Hunter, 2014), and the social dynamic between the two actors involved in each scene is a high priority. Both during our rehearsals and in Flute Theatre's work in general, the actors often multirole, that is, take on several different characters during the course of a performance. This means all actors have experienced the pieces from all different points of view and do not have a strong link to one particular character (e.g., Titania vs. Cobweb).
This provided us with more flexibility for data collection in the current study, because all actors could perform in all roles, but it also meant that we could not examine the neural mechanisms of developing expertise in performing one particular character. The question of how an actor's sense of self and brain activity patterns change when the actor delves into a single role over an extended period is an important one for future research, and claims can only be made upon replication and adaptation of this study.

Coherence across Brains and Bodies

Our multilevel hyperscanning configuration allowed us to explore methods to evaluate whether interpersonal coordination occurs between pairs of actors engaged in a dynamic task involving acting. We investigated the feasibility of using wavelet coherence analysis between the fNIRS signals (brain-to-brain coherence), heart and respiration rate signals (physiological coherence), and acceleration data (behavioral coherence) of both actors to assess interpersonal synchrony in naturalistic experiments. In particular, to disentangle whether the observed coherence patterns are meaningfully different from chance, we compared the wavelet coherence values of real dyads versus pseudodyads.

In terms of brain-to-brain coherence, we found statistically significant ( p < .05) interpersonal synchrony in our task frequency range (Figure 6) in R-IFG and R-FPC, and at low frequencies across all ROIs. The coherence in the task frequency range further confirms the involvement of right prefrontal regions in coordinating one's own behavior with a partner. This could be understood in terms of the mutual prediction model of interpersonal coordination (Hamilton, 2021; Kingsbury et al., 2019).
Coherence at low frequency ranges is harder to interpret. It could be related to the switches from one task to another, which involve substantial changes in interpersonal coordination and motor processing. However, very low frequency effects in fNIRS data are typically related to physiological changes such as autonomic regulation (Pinti, Cardone, & Merla, 2015). In addition, the very low frequency range might not be very informative, as the cutoff frequencies of the high-pass filters typically used in fNIRS studies (Pinti et al., 2019), and in this work as well (i.e., 0.005 Hz), attenuate the very low frequency components of the fNIRS signal. This suggests that the brain-to-brain coherence that we found at low frequencies might not be related to cognitive processing.

Thanks to our multimodal data set, we were able to further investigate whether any coherence occurred in the physiological signals. We found significant interpersonal synchrony especially in terms of HR (Figure 7A) and, to a lesser extent, in BR (Figure 7B). Interpersonal synchrony was significantly higher in the real dyads compared with pseudodyads, in agreement with previous studies. This suggests that synchrony of HR and breathing is higher in real dyads because of their interaction and the coordination of their behaviors. This is also shown by significant coherence in their movements at all frequencies (Figure 7C), given that our task involved movements with different time dynamics. The finding that HRs, breathing, and behavior are coordinated is consistent with a large number of previous studies documenting interpersonal synchrony in some of these measures (Fusaroli et al., 2016; Helm et al., 2012; Konvalinka et al., 2011). A critical question is then: how do the different types of interpersonal coordination relate to one another?
For example, a simple model might say that coordination of breathing (driven by speaking in a conversational pattern) might cause the actors' HR and brain activity to coordinate. If that were the case, we would expect to see coordination of breathing at task-related frequencies in addition to heart and brain coordination at these frequencies. The lack of breathing coordination in the task-frequency band speaks against this model. A second model might suggest that movement coordination, as the participants walk and interact, drives coordination of heartbeats (because some actions are more or less energetic) and of brain activity. This is a plausible explanation for the low-frequency effects: here, we see robust coordination in both acceleration and HR, together with a global brain coordination effect across the whole of the pFC. The indiscriminate nature of this effect suggests that the brain-to-brain coherence at the low frequencies may be of physiological origin rather than task-related, and mostly driven by HR changes.

However, this explanation may not apply so well to the brain-coherence changes in the task-related frequencies in R-IFG and R-FPC (see black arrows in Figure 6A and 6B). Coherence here is in the 0.020- to 0.021-Hz range (50-sec period) but only in these two brain regions, and not in the left inferior frontal gyrus or DLPFC. The same frequency range shows changes in acceleration but not in HR or breathing, although we must be cautious as the absence of a statistical effect cannot be strongly interpreted. This pattern of data suggests that the effects in R-IFG and R-FPC highlighted with arrows in Figure 6 may reflect the coordination of cognitive processes such as mutual prediction (Kingsbury et al., 2019). However, it is also possible that there are physiological effects here that we have not captured (Tachtsidis & Scholkmann, 2016). These are preliminary results on a small sample, and further study would be needed to confirm this.
Our analysis highlights some interesting frequency bands to examine in future work and shows that this analytical framework can be used to quantify multilevel interpersonal synchrony. Taken together, our preliminary results highlight that it is possible to gather multimodal hyperscanning data and investigate social interactions by means of interpersonal synchrony in an ecological setting. By extending this approach to multiple levels (brain, behavior, physiology), it will be possible to disentangle the factors that drive interpersonal coherence and gain a better understanding of its causes.

Limitations

The study reported here represents the first time that fNIRS has been used to quantify the neural processes involved in acting while participants are engaged in a dynamic performance. As such, it provides a proof of principle that this kind of research can be performed and opens the way to future work, but there are also several limitations to the project. First, we worked with a group of just six actors who performed in repeated sessions, giving us a total of 19 data collection sessions. This is a small sample size for neuroimaging research. With the same participants providing data repeatedly, multilevel modeling might seem appropriate, but unfortunately, the data set was too small for this approach to be viable, and models sometimes did not converge. Future studies with larger sample sizes could use multilevel data analysis methods.

Second, our strict data quality controls meant that we lost data for several sessions, and we were not able to analyze the medial prefrontal channels because we had fewer than six good data sets available.
Future studies could use acting performances where actors are seated and perform fewer movements to obtain better data, but that would reduce the dynamic interactivity of the performance. In addition, we generated pseudodyads with a within-dyad approach, that is, by swapping the first and second halves of the signals for one participant within the same dyad. This could alter the temporal dynamics of the pseudosignals and discard important session-wide trends that are not directly related to the social interaction, such as slow drifts arising from psychophysiological changes in the actors (e.g., the actor getting tired), and thus deflate the correlation in the pseudopairs. In our future work, we plan to adopt a between-dyads approach (i.e., correlate one actor's signal with the signal of an actor from another dyad), minimizing the problem of missing data across dyads, especially in terms of variable fNIRS channel exclusion. Finally, given our small sample size, we were not able to apply a correction for multiple comparisons to our name-calling contrasts. However, the cross-brain and physiological coherence results, which include data from the whole session rather than single events, did pass a correction for multiple comparisons. Larger sample sizes and methods to reduce motion artifacts should mitigate these issues in future studies.

In any neuroimaging study of natural behavior, there is the challenge of collecting detailed data on the behavior itself to understand the relationships between brain and behavior. Here, we used motion trackers and video to capture whole-body movement, but we did not have access to eye trackers or face cameras to capture gaze behavior and facial action cues. Our past experience is that eye trackers do not provide robust data in the context of these dynamic interactions, and increasing the number of devices worn by each actor also makes it more challenging for them to perform fully.
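The two pseudodyad schemes described above can be sketched in a few lines. The function names are ours, not from the article's analysis code, and the rotation used for the between-dyads pairing is just one simple way to pair signals across dyads.

```python
import numpy as np

def pseudodyad_within(signal):
    """Within-dyad surrogate: swap the first and second halves of one
    partner's signal before computing coherence with the other partner."""
    half = signal.size // 2
    return np.concatenate([signal[half:], signal[:half]])

def pseudodyads_between(signals):
    """Between-dyads surrogates: pair each signal with one recorded in a
    different dyad (here, a simple rotation of the dyad list)."""
    n = len(signals)
    return [(signals[i], signals[(i + 1) % n]) for i in range(n)]

x = np.arange(10.0)
print(pseudodyad_within(x))  # second half now precedes the first half
```

The within-dyad swap keeps each surrogate the same length as the real recording but breaks the slow, session-wide trends noted above; pairing across dyads preserves each signal intact, which is why it is the planned alternative.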
However, in future work, it will be useful to obtain more detailed measures of behavior to understand the social dynamics that mediate coherence between brains (Hamilton, 2021).

Broader Implications

Applying cognitive neuroscience to the real world requires overcoming many different challenges. These include the challenge of designing experiments that balance freedom and realism against experimental control, the technical challenges of data collection, and the challenge of analyzing complex multimodal data sets. As a proof-of-principle paper, our study demonstrates that it is possible to overcome these challenges and opens the way for future studies of the neuroscience of theater and other dynamic complex social situations. We suggest that it could be valuable to study in more detail how acting might change the sense of self, both as an actor develops a character over a series of rehearsals and in the longer term when people attend drama school and learn to express a character through drama. It would also be interesting to examine the relationship between drama training for adults and theater workshops for children with autism (Hunter, 2014), which may function as a dynamic and engaging way to let children practice new social skills.

Our work examining interpersonal coherence has broader relevance outside the study of theater. This topic is of growing interest across social neuroscience, with many studies reporting coherence at the level of the brain (Hirsch et al., 2018), heartbeat (Fusaroli et al., 2016), or behavior (Richardson & Dale, 2005). However, few studies have examined all these dimensions together. We show that it is possible to collect multimodal data on interpersonal coordination and use a wavelet coherence approach to analyze several different modalities. This means it is possible to compare the frequency bands where coherence is seen across modalities, and potentially draw conclusions about the origins of interpersonal coherence.
In particular, we highlight effects at a frequency of 0.020–0.021 Hz, where there is coherence in some brain regions, as a potential focus for future studies. Future studies of interpersonal coordination should record across multiple modalities to get a clearer understanding of the origins of these patterns of coordination and their causes. In fact, wavelet coherence is a powerful method that can be applied to understand the relationship between signals from different modalities. Future work can investigate what drives the neural coupling by computing WTC between the fNIRS and physiological signals (e.g., fNIRS vs. respiration), or between physiological signals (e.g., HR vs. acceleration).

Last, this article demonstrates the value of interdisciplinary research for the purpose of deepening our understanding of human social behavior. Our project brought together a large team from theater, engineering, psychology, and neuroscience to develop research in a novel area. We hope this can be a blueprint for future research in these fields and will pave the way for a deeper understanding of how theater and acting can transform our social selves and our understanding of the social world.

APPENDIX

Table A1. Group-level Results of the Mixed-model Analysis Computed on the fNIRS Beta Values

Contrast: Acting > control

Channel | F Value | df     | p Value | Included Channel
1       | 1.0703  | 81.184 | .304    | Yes
2       | 0.6338  | 63.158 | .429    | Yes
3       | 0.1325  | 25     | .719    | No (n < 6)
4       | 2.7557  | 24     | .110    | No (n < 6)
5       | 0.0208  | 24     | .887    | No (n < 6)
6       | 2.5544  | 82.473 | .114    | Yes
7       | 1.986   | 121.41 | .161    | Yes
8       | 3.2695  | 82.473 | .074    | Yes
9       | 0.5533  | 72.208 | .459    | Yes
10      | 0.2489  | 52.146 | .620    | Yes
11      | 0.036   | 24     | .851    | No (n < 6)
12      | 1.5912  | 24     | .219    | No (n < 6)
13      | 0.1183  | 82.179 | .732    | Yes
14      | 0.5493  | 91.676 | .461    | Yes
15      | 0.8279  | 34.052 | .369    | No (n < 6)
16      | 0.9206  | 63.385 | .341    | Yes
17      | 0.7868  | 121.7  | .377    | Yes
18      | 0.7868  | 121.7  | .377    | Yes
19      | 1.0479  | 15     | .322    | No (n < 6)
20      | 1.0479  | 15     | .322    | Yes
21      | 1.3097  | 53.23  | .258    | Yes
22      | 0.0018  | 34.077 | .967    | No (n < 6)

Contrast: Interaction

Channel | F Value | df     | p Value | Included Channel
1       | 1.936   | 81.184 | .168    | Yes
2       | 0.3128  | 63.158 | .578    | Yes
3       | 0.1932  | 25     | .664    | No (n < 6)
4       | 2.643   | 24     | .117    | No (n < 6)
5       | 7.1337  | 24     | .013    | No (n < 6)
*6      | 10.802  | 82.473 | .001*   | Yes
7       | 0.5697  | 121.41 | .452    | Yes
**8     | 6.2948  | 82.473 | .014**  | Yes
9       | 0.1817  | 72.208 | .671    | Yes
10      | 0.4125  | 52.146 | .524    | Yes
11      | 0.0044  | 24     | .948    | No (n < 6)
12      | 3.7178  | 24     | .066    | No (n < 6)
13      | 2.0169  | 82.179 | .159    | Yes
14      | 0.3421  | 91.676 | .560    | Yes
15      | 6.9552  | 34.052 | .013    | No (n < 6)
16      | 0.692   | 63.385 | .409    | Yes
17      | 0.0903  | 121.7  | .764    | Yes
18      | 0.0903  | 121.7  | .764    | Yes
19      | 0.1281  | 15     | .725    | No (n < 6)
20      | 0.1281  | 15     | .725    | Yes
21      | 0.1918  | 53.23  | .663    | Yes
22      | 16.5215 | 34.077 | .000    | No (n < 6)

Contrast: Self name > other

Channel | F Value | df     | p Value | Included Channel
1       | 0.0341  | 81.184 | .854    | Yes
2       | 0.2547  | 63.158 | .616    | Yes
3       | 0.0024  | 25     | .961    | No (n < 6)
4       | 0.0614  | 24     | .806    | No (n < 6)
5       | 0.1203  | 24     | .732    | No (n < 6)
6       | 0.2964  | 82.473 | .588    | Yes
7       | 4.6713  | 121.41 | .033*   | Yes
8       | 0.7552  | 82.473 | .387    | Yes
9       | 0.1531  | 72.208 | .697    | Yes
10      | 0.0213  | 52.146 | .884    | Yes
11      | 1.59    | 24     | .219    | No (n < 6)
12      | 7.932   | 24     | .010    | No (n < 6)
13      | 0.0091  | 82.179 | .924    | Yes
14      | 0.8987  | 91.676 | .346    | Yes
15      | 0.137   | 34.052 | .714    | No (n < 6)
16      | 2.9181  | 63.385 | .092    | Yes
17      | 1.2649  | 121.7  | .263    | Yes
18      | 1.2649  | 121.7  | .263    | Yes
19      | 0.8346  | 15     | .375    | No (n < 6)
20      | 0.8346  | 15     | .375    | Yes
21      | 0.0504  | 53.23  | .823    | Yes
22      | 2.0822  | 34.077 | .158    | No (n < 6)

The model was fit in R Studio using lmer with the model beta ~ act_no * self_name + (1|pptno), and results are reported from anova.lmer. Channels with fewer than six contributing data sets were excluded from our main analysis; they are reported here for completeness, but we do not highlight significant results in these channels. Single asterisks indicate channels showing a significant effect that replicates our primary analysis as illustrated in Figure 5. Double asterisks mark other channels with significant effects.

Reprint requests should be sent to Dwaynica A. Greaves, Institute of Cognitive Neuroscience, University College London, Alexandra House, WC1N 3AZ, London, United Kingdom, or via e-mail: dwaynica.greaves.20@ucl.ac.uk.

Data Availability Statement

There is no IRB approval for data posting and sharing due to the small sample of unique participants.

Author Contributions

Dwaynica A. Greaves: Conceptualization; Data curation; Investigation; Methodology; Project administration; Writing—Original draft; Writing—Review & editing. Paola Pinti: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Sara Din: Conceptualization; Data curation; Investigation; Methodology; Resources; Validation; Writing—Original draft. Robert Hickson: Data curation; Formal analysis; Methodology; Software; Writing—Original draft.
Mingyi Diao: Data curation; Formal analysis; Methodology; Software; Writing—Original draft. Charlotte Lange: Data curation; Investigation; Methodology; Resources; Validation. Priyasha Khurana: Data curation; Formal analysis. Kelly Hunter: Conceptualization; Methodology; Project administration; Resources. Ilias Tachtsidis: Methodology; Resources; Software; Supervision. Antonia F. de C. Hamilton: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing—Original draft; Writing—Review & editing.

Diversity in Citation Practices

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

REFERENCES

Astolfi, L., Toppi, J., Ciaramidaro, A., Vogel, P., Freitag, C. M., & Siniatchkin, M. (2020). Raising the bar: Can dual scanning improve our understanding of joint action? Neuroimage, 216, 116813. https://doi.org/10.1016/j.neuroimage.2020.116813, PubMed: 32276053

Bevilacqua, D., Davidesco, I., Wan, L., Chaloner, K., Rowland, J., Ding, M., et al. (2018). Brain-to-brain synchrony and learning outcomes vary by student–teacher dynamics: Evidence from a real-world classroom electroencephalography study. Journal of Cognitive Neuroscience, 31, 401–411. https://doi.org/10.1162/jocn_a_01274, PubMed: 29708820

Bianchi, L. J., Kingstone, A., & Risko, E. F. (2020). The role of cognitive load in modulating social looking: A mobile eye tracking study. Cognitive Research: Principles and Implications, 5, 44. https://doi.org/10.1186/s41235-020-00242-5, PubMed: 32936361

Brown, S., Cockett, P., & Yuan, Y. (2019). The neuroscience of Romeo and Juliet: An fMRI study of acting. Royal Society Open Science, 6, 181908. https://doi.org/10.1098/rsos.181908, PubMed: 31032043

Calvo-Merino, B., Jola, C., Glaser, D. E., & Haggard, P. (2008). Towards a sensorimotor aesthetics of performing art. Consciousness and Cognition, 17, 911–922. https://doi.org/10.1016/j.concog.2007.11.003, PubMed: 18207423

Carmody, D. P., & Lewis, M. (2006). Brain activation when hearing one's own and others' names. Brain Research, 1116, 153–158. https://doi.org/10.1016/j.brainres.2006.07.121, PubMed: 16959226

Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America, 25, 975–979. https://doi.org/10.1121/1.1907229

Cross, E. S., Hamilton, A. F., & Grafton, S. T. (2006). Building a motor simulation de novo: Observation of dance by dancers. Neuroimage, 31, 1257–1267. https://doi.org/10.1016/j.neuroimage.2006.01.033, PubMed: 16530429

Cruz-Garza, J. G., Brantley, J. A., Nakagome, S., Kontson, K., Megjhani, M., Robleto, D., et al. (2017). Deployment of mobile EEG technology in an art museum setting: Evaluation of signal quality and usability. Frontiers in Human Neuroscience, 11, 527. https://doi.org/10.3389/fnhum.2017.00527, PubMed: 29176943

Cui, X., Bryant, D. M., & Reiss, A. L. (2012). NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation. Neuroimage, 59, 2430–2437. https://doi.org/10.1016/j.neuroimage.2011.09.003, PubMed: 21933717

Dupre, E., Luh, W.-M., & Spreng, R. N. (2016). Multi-echo fMRI replication sample of autobiographical memory, prospection and theory of mind reasoning tasks. Scientific Data, 3, 160116. https://doi.org/10.1038/sdata.2016.116, PubMed: 27996964

Fishburn, F. A., Murty, V. P., Hlutkowsky, C. O., Macgillivray, C. E., Bemis, L. M., Murphy, M. E., et al. (2018). Putting our heads together: Interpersonal neural synchronization as a biological mechanism for shared intentionality. Social Cognitive and Affective Neuroscience, 13, 841–849. https://doi.org/10.1093/scan/nsy060, PubMed: 30060130

Friston, K. J., Holmes, A. P., Worsley, K. J., Poline, J.-P., Frith, C. D., & Frackowiak, R. S. J. (1994). Statistical parametric maps in functional imaging: A general linear approach. Human Brain Mapping, 2, 189–210. https://doi.org/10.1002/hbm.460020402

Fusaroli, R., Bjørndahl, J. S., Roepstorff, A., & Tylén, K. (2016). A heart for interaction: Shared physiological dynamics and behavioral coordination in a collective, creative construction task. Journal of Experimental Psychology: Human Perception and Performance, 42, 1297–1310. https://doi.org/10.1037/xhp0000207, PubMed: 26962844

German, T. P., Niehaus, J. L., Roarty, M. P., Giesbrecht, B., & Miller, M. B. (2004). Neural correlates of detecting pretense: Automatic engagement of the intentional stance under covert conditions. Journal of Cognitive Neuroscience, 16, 1805–1817. https://doi.org/10.1162/0898929042947892, PubMed: 15701230

Goldstein, T. R., & Bloom, P. (2011). The mind on stage: Why cognitive scientists should study acting. Trends in Cognitive Sciences, 15, 141–142. https://doi.org/10.1016/j.tics.2011.02.003, PubMed: 21398168

Grinsted, A., Moore, J. C., & Jevrejeva, S. (2004). Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlinear Processes in Geophysics, 11, 561–566. https://doi.org/10.5194/npg-11-561-2004

Gronau, N., Cohen, A., & Ben-Shakhar, G. (2003). Dissociations of personally significant and task-relevant distractors inside and outside the focus of attention: A combined behavioral and psychophysiological study. Journal of Experimental Psychology: General, 132, 512–529. https://doi.org/10.1037/0096-3445.132.4.512, PubMed: 14640845

Hamilton, A. F. (2021). Hyperscanning: Beyond the hype. Neuron, 109, 404–407. https://doi.org/10.1016/j.neuron.2020.11.008, PubMed: 33259804

Hasson, U., Ghazanfar, A. A., Galantucci, B., Garrod, S., & Keysers, C. (2012). Brain-to-brain coupling: A mechanism for creating and sharing a social world. Trends in Cognitive Sciences, 16, 114–121. https://doi.org/10.1016/j.tics.2011.12.007, PubMed: 22221820

Helm, J. L., Sbarra, D., & Ferrer, E. (2012). Assessing cross-partner associations in physiological responses via coupled oscillator models. Emotion, 12, 748–762. https://doi.org/10.1037/a0025036, PubMed: 21910541

Hirsch, J., Noah, J. A., Zhang, X., Dravida, S., & Ono, Y. (2018). A cross-brain neural mechanism for human-to-human verbal communication. Social Cognitive and Affective Neuroscience, 13, 907–920. https://doi.org/10.1093/scan/nsy070, PubMed: 30137601

Hirsch, J., Tiede, M., Zhang, X., Noah, J. A., Salama-Manteau, A., & Biriotti, M. (2021). Interpersonal agreement and disagreement during face-to-face dialogue: An fNIRS investigation. Frontiers in Human Neuroscience, 14, 606397. https://doi.org/10.3389/fnhum.2020.606397, PubMed: 33584223

Holeckova, I., Fischer, C., Morlet, D., Delpuech, C., Costes, N., & Mauguière, F. (2008). Subject's own name as a novel in a MMN design: A combined ERP and PET study. Brain Research, 1189, 152–165. https://doi.org/10.1016/j.brainres.2007.10.091, PubMed: 18053971

Hunter, K. (2014). Shakespeare's heartbeat: Drama games for children with autism. London: Routledge. https://doi.org/10.4324/9781315747477

Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009). HomER: A review of time-series analysis methods for near-infrared spectroscopy of the brain. Applied Optics, 48, D280–D298. https://doi.org/10.1364/ao.48.00d280, PubMed: 19340120

Imafuku, M., Hakuno, Y., Uchida-Ota, M., Yamamoto, J.-I., & Minagawa, Y. (2014). "Mom called me!" Behavioral and prefrontal responses of infants to self-names spoken by their mothers. Neuroimage, 103, 476–484. https://doi.org/10.1016/j.neuroimage.2014.08.034, PubMed: 25175541

Issartel, J., Marin, L., & Cadopi, M. (2007). Unintended interpersonal co-ordination: Can we march to the beat of our own drum? Neuroscience Letters, 411, 174–179. https://doi.org/10.1016/j.neulet.2006.09.086, PubMed: 17123718

Jiang, J., Dai, B., Peng, D., Zhu, C., Liu, L., & Lu, C. (2012). Neural synchronization during face-to-face communication. Journal of Neuroscience, 32, 16064–16069. https://doi.org/10.1523/JNEUROSCI.2926-12.2012, PubMed: 23136442

Kampe, K. K. W., Frith, C. D., & Frith, U. (2003). "Hey John": Signals conveying communicative intention toward the self activate brain regions associated with "mentalizing," regardless of modality. Journal of Neuroscience, 23, 5258–5263. https://doi.org/10.1523/JNEUROSCI.23-12-05258.2003, PubMed: 12832550

Kingsbury, L., Huang, S., Wang, J., Gu, K., Golshani, P., Wu, Y. E., et al. (2019). Correlated neural activity and encoding of behavior across brains of socially interacting animals. Cell, 178, 429–446. https://doi.org/10.1016/j.cell.2019.05.022, PubMed: 31230711

Kirk, U., Skov, M., Hulme, O., Christensen, M. S., & Zeki, S. (2009). Modulation of aesthetic value by semantic context: An fMRI study. Neuroimage, 44, 1125–1132. https://doi.org/10.1016/j.neuroimage.2008.10.009, PubMed: 19010423

Kogan, S., & Kogan, H. (2009). The science of acting (1st ed., pp. 1–264). London: Routledge. https://doi.org/10.4324/9780203874042

Konvalinka, I., Xygalatas, D., Bulbulia, J., Schjodt, U., Jegindo, E.-M., Wallot, S., et al. (2011). Synchronized arousal between performers and related spectators in a fire-walking ritual. Proceedings of the National Academy of Sciences, U.S.A., 108, 8514–8519. https://doi.org/10.1073/pnas.1016955108, PubMed: 21536887

Launay, J., Tarr, B., & Dunbar, R. I. M. (2016). Synchrony as an adaptive mechanism for large-scale human social bonding. Ethology, 122, 779–789. https://doi.org/10.1111/eth.12528

Martin, A. K., Kessler, K., Cooke, S., Huang, J., & Meinzer, M. (2020). The right temporoparietal junction is causally associated with embodied perspective-taking. Journal of Neuroscience, 40, 3089–3095. https://doi.org/10.1523/JNEUROSCI.2637-19.2020, PubMed: 32132264

Molavi, B., & Dumont, G. A. (2012). Wavelet-based motion artifact removal for functional near-infrared spectroscopy. Physiological Measurement, 33, 259–270. https://doi.org/10.1088/0967-3334/33/2/259, PubMed: 22273765

Noah, J. A., Zhang, X., Dravida, S., Ono, Y., Naples, A., McPartland, J. C., et al. (2020). Real-time eye-to-eye contact is associated with cross-brain neural coupling in angular gyrus. Frontiers in Human Neuroscience, 14, 19. https://doi.org/10.3389/fnhum.2020.00019, PubMed: 32116606

Northoff, G., Heinzel, A., de Greck, M., Bermpohl, F., Dobrowolny, H., & Panksepp, J. (2006). Self-referential processing in our brain—A meta-analysis of imaging studies on the self. Neuroimage, 31, 440–457. https://doi.org/10.1016/j.neuroimage.2005.12.002, PubMed: 16466680

Omigie, D., Dellacherie, D., Hasboun, D., George, N., Clement, S., Baulac, M., et al. (2015). An intracranial EEG study of the neural dynamics of musical valence processing. Cerebral Cortex, 25, 4038–4047. https://doi.org/10.1093/cercor/bhu118, PubMed: 24904066

Palumbo, R., Marraccini, M. E., Weyandt, L. L., & Wilder-Smith, O. (2016). Interpersonal autonomic physiology: A systematic review of the literature. Personality and Social Psychology Review, 21, 99–141. https://doi.org/10.1177/1088868316628405, PubMed: 26921410

Penny, W., Friston, K., Ashburner, J., Kiebel, S., & Nichols, T. (2006). Statistical parametric mapping: The analysis of functional brain images. Elsevier. https://doi.org/10.1016/B978-0-12-372560-8.X5000-1

Pinti, P., Aichelburg, C., Lind, F., Power, S., Swingler, E., Merla, A., et al. (2015). Using fiberless, wearable fNIRS to monitor brain activity in real-world cognitive tasks. Journal of Visualized Experiments, 106, 53336. https://doi.org/10.3791/53336, PubMed: 26651025

Pinti, P., Cardone, D., & Merla, A. (2015). Simultaneous fNIRS and thermal infrared imaging during cognitive task reveal autonomic correlates of prefrontal cortex activity. Scientific Reports, 5, 17471. https://doi.org/10.1038/srep17471, PubMed: 26632763

Pinti, P., Scholkmann, F., Hamilton, A., Burgess, P., & Tachtsidis, I. (2019). Current status and issues regarding pre-processing of fNIRS neuroimaging data: An investigation of diverse signal filtering methods within a general linear model framework. Frontiers in Human Neuroscience, 12, 505. https://doi.org/10.3389/fnhum.2018.00505, PubMed: 30687038

Pinti, P., Tachtsidis, I., Hamilton, A., Hirsch, J., Aichelburg, C., Gilbert, S., et al. (2020). The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience. Annals of the New York Academy of Sciences, 1464, 5–29. https://doi.org/10.1111/nyas.13948, PubMed: 30085354

Quer, G., Daftari, J., & Rao, R. R. (2016). Heart rate wavelet coherence analysis to investigate group entrainment. Pervasive and Mobile Computing, 28, 21–34. https://doi.org/10.1016/j.pmcj.2015.09.008

Remland, M. S., Jones, T. S., & Brinkman, H. (1995). Interpersonal distance, body orientation, and touch: Effects of culture, gender, and age. Journal of Social Psychology, 135, 281–297. https://doi.org/10.1080/00224545.1995.9713958, PubMed: 7650932

Richardson, D. C., & Dale, R. (2005). Looking to understand: The coupling between speakers' and listeners' eye movements and its relationship to discourse comprehension. Cognitive Science, 29, 1045–1060. https://doi.org/10.1207/s15516709cog0000_29, PubMed: 21702802

Risko, E. F., Richardson, D., & Kingstone, A. (2016). Breaking the fourth wall of cognitive science: Real-world social attention and the dual function of gaze. Current Directions in Psychological Science, 25, 70–74. https://doi.org/10.1177/0963721415617806

Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., et al. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36, 393–414. https://doi.org/10.1017/S0140525X12000660, PubMed: 23883742

Shapiro, K. L., Caldwell, J., & Sorensen, R. E. (1997). Personal names and the attentional blink: A visual "cocktail party" effect. Journal of Experimental Psychology: Human Perception and Performance, 23, 504–514. https://doi.org/10.1037/0096-1523.23.2.504, PubMed: 9104007

Stanislavsky, K., & Hapgood, E. R. (1949). Building a character (p. 292). London: Bloomsbury Academic.

Stephens, G. J., Silbert, L. J., & Hasson, U. (2010). Speaker–listener neural coupling underlies successful communication. Proceedings of the National Academy of Sciences, U.S.A., 107, 14425–14430. https://doi.org/10.1073/pnas.1008662107, PubMed: 20660768

Tachtsidis, I., & Scholkmann, F. (2016). False positives and false negatives in functional near-infrared spectroscopy: Issues, challenges, and the way forward. Neurophotonics, 3, 031405. https://doi.org/10.1117/1.nph.3.3.031405, PubMed: 27054143

Tak, S., Uga, M., Flandin, G., Dan, I., & Penny, W. D. (2016). Sensor space group analysis for fNIRS data. Journal of Neuroscience Methods, 264, 103–112. https://doi.org/10.1016/j.jneumeth.2016.03.003, PubMed: 26952847

Torrence, C., & Compo, G. P. (1998). A practical guide to wavelet analysis. Bulletin of the American Meteorological Society, 79, 61–78. https://doi.org/10.1175/1520-0477(1998)079<0061:APGTWA>2.0.CO;2

Trapp, K., Spengler, S., Wüstenberg, T., Wiers, E., Busch, N. A., & Bermpohl, F. (2014). Imagining triadic interactions simultaneously activates mirror and mentalizing systems. Neuroimage, 98, 314–323. https://doi.org/10.1016/j.neuroimage.2014.05.003, PubMed: 24825504

Vlasic, D., Adelsberger, R., Vannucci, G., Barnwell, J., Gross, M., Matusik, W., et al. (2007). Practical motion capture in everyday surroundings. ACM Transactions on Graphics, 26, 35. https://doi.org/10.1145/1276377.1276421

Whitehead, C., Marchant, J. L., Craik, D., & Frith, C. D. (2009). Neural correlates of observing pretend play in which one object is represented as another. Social Cognitive and Affective Neuroscience, 4, 369–378. https://doi.org/10.1093/scan/nsp021, PubMed: 19535615

Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9, 625–636. https://doi.org/10.3758/BF03196322, PubMed: 12613670

Yücel, M. A., Selb, J., Boas, D. A., Cash, S. S., & Cooper, R. J. (2014). Reducing motion artifacts for long-term clinical NIRS monitoring using collodion-fixed prism-based optical fibers. Neuroimage, 85, 192–201. https://doi.org/10.1016/j.neuroimage.2013.06.054, PubMed: 23796546

Journal of Cognitive Neuroscience, Volume 34, Number 12