RESEARCH ARTICLE

The Language Network Is Recruited but Not
Required for Nonverbal Event Semantics

Anna A. Ivanova1,2, Zachary Mineroff1, Vitor Zimmerer3, Nancy Kanwisher1,2, Rosemary Varley3, and Evelina Fedorenko1,2

1Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
2McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
3Division of Psychology and Language Sciences, University College London, London, UK

Keywords: fMRI, aphasia, events, semantics, thematic roles, language and thought

ABSTRACT

The ability to combine individual concepts of objects, properties, and actions into complex
representations of the world is often associated with language. Yet combinatorial event-level
representations can also be constructed from nonverbal input, such as visual scenes. Here, we
test whether the language network in the human brain is involved in and necessary for semantic
processing of events presented nonverbally. In Experiment 1, we scanned participants with fMRI
while they performed a semantic plausibility judgment task versus a difficult perceptual control
task on sentences and line drawings that describe/depict simple agent–patient interactions. We
found that the language network responded robustly during the semantic task performed on both
sentences and pictures (although its response to sentences was stronger). Thus, language regions
in healthy adults are engaged during a semantic task performed on pictorial depictions of events.
But is this engagement necessary? In Experiment 2, we tested two individuals with global
aphasia, who have sustained massive damage to perisylvian language areas and display severe
language difficulties, against a group of age-matched control participants. Individuals with
aphasia were severely impaired on the task of matching sentences to pictures. However, they
performed close to controls in assessing the plausibility of pictorial depictions of agent–patient
interactions. Overall, our results indicate that the left frontotemporal language network is
recruited but not necessary for semantic processing of nonverbally presented events.

INTRODUCTION

Many thinkers have argued for an intimate relationship between language and thought, in fields
as diverse as philosophy (Carruthers, 2002; Davidson, 1975; Wittgenstein, 1961), psychology
(Sokolov, 1972; Vygotsky, 2012; Watson, 1920), linguistics (Berwick & Chomsky, 2016;
Bickerton, 1990; Chomsky, 2007; Hinzen, 2013; Jackendoff, 1996), and artificial intelligence
(Brown et al., 2020; Goldstein & Papert, 1977; Turing, 1950; Winograd, 1976). According to
such accounts, language enables us to access our vast knowledge of objects, properties, and
actions—often referred to as semantic knowledge—and flexibly combine individual semantic
units to produce complex situation-specific representations called thoughts. The hypothesis that
language is critical for thought crucially depends on whether or not language is essential for
combinatorial semantic processing: If we can access and combine individual concepts in the
absence of language, that would constitute evidence against the necessity of language in

Citation: Ivanova, A. A., Mineroff, Z., Zimmerer, V., Kanwisher, N., Varley, R.,
& Fedorenko, E. (2021). The language network is recruited but not required
for nonverbal event semantics. Neurobiology of Language, 2(2), 176–201.
https://doi.org/10.1162/nol_a_00030

DOI:
https://doi.org/10.1162/nol_a_00030

Supporting Information:
https://doi.org/10.1162/nol_a_00030

Received: 4 September 2020
Accepted: 7 January 2021

Competing Interests: The authors have
declared that no competing interests
exist.

Corresponding Authors:
Anna A. Ivanova
annaiv@mit.edu

Evelina Fedorenko
evelina9@mit.edu

Handling Editor:
Rik Vandenberghe

Copyright: © 2021 Massachusetts
Institute of Technology. Published
under a Creative Commons Attribution
4.0 International (CC BY 4.0) license.

The MIT Press


Semantic knowledge:
Generalized, abstract information
about objects, properties, scenes,
actions, and ideas.

Semantic processing:
The process of accessing and
manipulating semantic knowledge.

The language network:
A set of left-lateralized regions in the
frontal and temporal lobes that show
selective responses to spoken and
written language.

Global aphasia:
A severe form of language impairment
caused by damage to the language
network, resulting in substantial
impairments in both production and
comprehension.

forming novel thoughts. Here, we test the link between language and thought by examining the
role of the language network in a nonverbal combinatorial semantic task.

Recent evidence from neuroscience suggests that language processing is largely distinct from
other aspects of cognition (Fedorenko & Blank, 2020; Fedorenko & Varley, 2016). A network of
left-lateralized frontal and temporal brain regions (here referred to as the language network) has
been found to respond to written/spoken/signed words and sentences, but not to mental arith-
metic, music perception, executive function tasks, action/gesture perception, or computer pro-
gramming (Amalric & Dehaene, 2019; X. Chen et al., 2021; Fedorenko et al., 2011; Ivanova et al.,
2020; Jouravlev et al., 2019; Liu et al., 2020; MacSweeney et al., 2002; Monti et al., 2009, 2012;
Pritchett et al., 2018). Similarly, investigations of patients with profound disruption of language
capacity (global aphasia) have shown that some of these individuals can solve arithmetic and
logic problems, appreciate and create music, and think about others’ thoughts in spite of their
language impairment (Basso & Capitani, 1985; Luria et al., 1965; Varley et al., 2005; Varley &
Siegal, 2000), providing converging evidence that language is subserved by domain-specific
cognitive mechanisms.

Despite this significant progress in dissociating linguistic and nonlinguistic processing, the role
of the language network in nonverbal semantics remains unclear. Semantics is often considered
to be an integral part of linguistic processing (Altshuler et al., 2019; Binder et al., 2009; Fillmore,
2006; Milberg & Blumstein, 1981; Pinker & Levin, 1991; Talmy, 2000): Each content word is
linked to an underlying semantic representation (lexical semantics), and these representations
combine to form phrase- and sentence-level meanings (combinatorial semantics). This tight integration between
language and semantics suggests that the frontal and temporal language regions may play an im-
portant role in storing and processing semantic information (see Hasson et al., 2015, for general
arguments against the separation of storage and processing/computation in the brain). However,
many semantic representations can also be activated by nonverbal input (e.g., the concept CAT
can be evoked not only by the word “cat,” but also by a picture or the sight of a cat), suggesting
that language does not necessarily have a privileged role in semantic processing. In this work, we
ask whether the frontotemporal language network supports semantic processing for both verbal
and nonverbal stimuli or whether it is only engaged in the semantic processing of verbal input.

A large body of work has aimed to address the role of the language network in nonverbal
semantics; sin embargo, different sources of evidence have produced conflicting results.
Neuroimaging studies that explicitly compared verbal and nonverbal semantic processing of
objects (e.g., Devereux et al., 2013; Fairhall & Caramazza, 2013; Handjaras et al., 2017;
Shinkareva et al., 2011; Vandenberghe et al., 1996; Visser et al., 2012), actions (e.g., Wurm &
Caramazza, 2019), and events (Baldassano et al., 2018; Hu et al., 2019; Jouen et al., 2015;
Thierry & Price, 2006) often report overlapping activation in left-lateralized frontal and temporal
areas, which may reflect the engagement of the language network. In contrast, neuropsychology
studies have often reported dissociations between linguistic and semantic deficits in patients with
aphasia (e.g., Antonucci & Reilly, 2008; Chertkow et al., 1997; Dickey & Warren, 2015; Jefferies
& Lambon Ralph, 2006; Saygın et al., 2004; cf. Saygın et al., 2003), suggesting that verbal and
nonverbal semantic processes rely on distinct neural circuits. Both groups of studies have limita-
tions that make it difficult to reconcile their findings. The neuroimaging studies have typically
relied on group analyses—an approach known to overestimate overlap in cases of nearby func-
tionally distinct areas (Nieto-Castañón & Fedorenko, 2012)—and/or do not report effect sizes,
which are critical for interpreting the functional profiles of the regions in question (a region that
responds similarly strongly to verbal and nonverbal semantic tasks plausibly supports computa-
tions that are different from a region that responds to both, but shows a two to three times stronger
response to verbal semantics; see, e.g., G. Chen et al., 2017, for discussion). Meanwhile, the


Event:
An action along with the entities
participating in that action.

Thematic role:
The role that a given entity plays in an
event, such as agent or patient.

Agent:
The event participant administering
the action.

Patient:
The event participant that is being
acted upon.

Event plausibility:
The likelihood of a given event
happening in the real world.

aphasia studies have typically investigated cases where only some of the language regions were
damaged, leaving open the possibility that the intact portions of the language network were
still contributing to nonverbal semantic processing. Más, neuroimaging and aphasia studies
typically rely on different experimental paradigms, making it challenging to directly compare
their results.

It should also be noted that few neuropsychological studies (with the exception of Dresang
et al., 2019; Marshall et al., 1993) have investigated the processing of verbal and nonverbal events
(as opposed to individual objects or actions). Constructing event-level mental representations
requires object and action processing but is not reducible to them (Dresang et al., 2019) y ahí-
fore may engage additional cognitive operations. En particular, to understand an event, we must
identify relations between participating entities and assign them thematic roles (Estes et al.,
2011). This process of identifying who did what to whom has traditionally been considered a hall-
mark of the language system (Fillmore, 1968; Gruber, 1965). Thus, if any aspect of semantic pro-
cessing requires language, event understanding would seem to be one of the strongest candidates.

Event processing has perhaps been most extensively investigated in EEG research, where a
number of studies have reported that semantic violations in visually presented scenes/events
evoke the N400 response, a marker of semantic processing (Coco et al., 2020; Cohn, 2020;
Jouen et al., 2019; Proverbio & Riva, 2009; Sitnikova et al., 2008; Võ & Wolfe, 2013; West &
Holcomb, 2002; see Kutas & Federmeier, 2011, for a review), similarly to semantic violations in
sentences, where the N400 component was originally discovered (Kutas & Hillyard, 1980). The
EEG results have been taken to suggest that linguistic and visual semantic processing rely on a
shared mechanism. However, because the neural generators of the N400 remain debated (Lau
et al., 2008, 2016; Matsumoto et al., 2005; Zhu et al., 2019), this evidence does not definitively
demonstrate the involvement of the language network in visual event processing.

Aquí, we synergistically combine neuroimaging and neuropsychological evidence to ask
whether the language network is engaged during and/or is necessary for nonverbal event seman-
tics. We focus on the understanding of agent–patient relations (“who did what to whom”) in vi-
sually presented scenes. Identification of thematic relations is critical to understanding and
generating sentences (Carlson & Tanenhaus, 1988; Fillmore, 2002; Jackendoff, 1987), but agent
and patient are not exclusively linguistic notions: They likely constitute part of humans’ core
knowledge (Rissman & Majid, 2019; Spelke & Kinzler, 2007; Strickland, 2017; L. Wagner &
Lakusta, 2009) and are integral to visual event processing (Cohn & Paczynski, 2013; Hafri
et al., 2018). Investigating the role of the language network in processing agent–patient relations
therefore constitutes an important test of the relationship between language and combinatorial
event semantics.

We used two kinds of evidence in our study: (1) fMRI in neurotypical participants, and (2)
behavioral data from two individuals with global aphasia and a group of age-matched healthy
controls. All participants were asked to evaluate the plausibility of events, presented either as
sentences (neurotypicals only) or as pictures. To ensure that participants could not rely on
low-level visual cues when evaluating picture plausibility, we used line drawings rather than pho-
tographs. The line drawings were highly controlled: Each picture pair depicted two animate par-
ticipants engaged in a certain interaction, but the participants’ roles in this interaction were either
plausible (e.g., a cop arresting a criminal) or implausible (e.g., a criminal arresting a cop). This
manipulation allowed us to ensure that participants could not infer picture plausibility based
solely on the attributes of a single participant; rather, they had to evaluate the event as a whole.

To foreshadow our results, we find that language-responsive brain areas in neurotypical partic-
ipants respond during the plausibility task for both sentences and pictures (although the responses


are lower for pictures). However, individuals with global aphasia, who sustained severe damage
to language areas, perform well on the picture plausibility task, suggesting that the language net-
work is not required for constructing combinatorial representations of visually depicted events.

MATERIALS AND METHODS

Experiment 1: Is the Language Network Active During a Nonverbal Event Semantics Task?

Overview

In the first experiment, we presented neurotypical participants with sentences and pictures de-
scribing/depicting agent–patient interactions that were either plausible or implausible (Figure 1),
while the participants were undergoing an fMRI scan. Participants performed a semantic judg-
ment task on the sentences and pictures, as well as a difficulty-matched low-level perceptual
control task on the same stimuli, in a 2 × 2 blocked design. In separate blocks, participants were
instructed to indicate either (i) whether the stimulus was plausible or implausible (the semantic
task) or (ii) whether the stimulus was moving to the left or right (the perceptual task). The lan-
guage regions in each participant were identified using a separate functional language localizer
task (sentences > nonwords contrast; Fedorenko et al., 2010). We then measured the response of
those regions to sentences and pictures during the semantic and perceptual tasks.

Participants

Twenty-four participants took part in the fMRI experiment (11 female, mean age = 25 years, SD =
5.2). The participants were recruited from MIT and the surrounding Cambridge/Boston, MA,
community and paid for their participation. All were native speakers of English, had normal hear-
ing and vision, and had no history of language impairment. All were right-handed (as assessed by
Oldfield’s, 1971, handedness questionnaire, or self-report). Two participants had low behavior-
al accuracy scores (<60%), and one had right-lateralized language regions (as evaluated by the
language localizer task; see below); they were excluded from the analyses, which were therefore
based on data from 21 participants. The protocol for the study was approved by MIT’s
Committee on the Use of Humans as Experimental Subjects (COUHES). All participants gave
written informed consent in accordance with protocol requirements.

Figure 1. Sample stimuli used in the experiment. For both sentences and pictures, participants
were required to perform either a semantic plausibility judgment task (“Plausible or implausible?”)
or a control perceptual task (“Moving left or right?”). The full set of materials is available at
https://osf.io/gsudr/.

Design, materials, and procedure

All participants completed a language localizer task aimed at identifying language-responsive
brain regions (Fedorenko et al., 2010) and the critical picture/sentence plausibility task.

The localizer task was conducted in order to identify brain regions within individual partic-
ipants that selectively respond to language stimuli. During the task, participants read sentences
(e.g., NOBODY COULD HAVE PREDICTED THE EARTHQUAKE IN THIS PART OF THE
COUNTRY) and lists of unconnected, pronounceable nonwords (e.g., U BIZBY ACWORRILY
MIDARAL MAPE LAS POME U TRINT WEPS WIBRON PUZ) in a blocked design. Each stimulus
consisted of twelve words/nonwords. For details of how the language materials were constructed,
see Fedorenko et al. (2010). The materials are available at https://evlab.mit.edu/funcloc/. The
sentences > nonword-lists contrast has been previously shown to reliably activate left-lateralized
frontotemporal language processing regions and to be robust to changes in the materials, task,
and modality of presentation (Fedorenko et al., 2010; Mahowald & Fedorenko, 2016; Scott et al.,
2017). Stimuli were presented in the center of the screen, one word/nonword at a time, en el
rate of 450 ms per word/nonword. Each stimulus was preceded by a 100 ms blank screen and
followed by a 400 ms screen showing a picture of a finger pressing a button, and a blank screen
for another 100 ms, for a total trial duration of 6 s. Participants were asked to press a button
whenever they saw the picture of a finger pressing a button. This task was included to help partic-
ipants stay alert and awake. Condition order was counterbalanced across runs. Experimental
blocks lasted 18 s (with 3 trials per block), and fixation blocks lasted 14 s. Each run (consisting
of 5 fixation blocks and 16 experimental blocks) lasted 358 s. Each participant completed 2 runs.

The picture plausibility task included two types of stimuli: (1) black-and-white line drawings
depicting plausible and implausible agent–patient interactions (created by an artist for this
study), and (2) simple sentences describing the same interactions. Sample stimuli are shown
in Figure 1, and a full list of materials is available on this article’s website (https://osf.io/gsudr/).
Forty plausible-implausible pairs of pictures, and forty plausible-implausible pairs of correspond-
ing sentences were used. The full set of materials was divided into two lists, such that List 1 used
plausible pictures and implausible sentences for odd-numbered items, and implausible pictures
and plausible sentences for even-numbered items, and List 2 did the opposite. Thus, each list
contained either a picture or a sentence version of any given event. Stimuli were presented in
a blocked design (each block included either pictures or sentences) and were moving either to
the right or to the left for the duration of stimulus presentation. At the beginning of each block,
participants were told which task they would have to perform next: semantic or perceptual. The
semantic task required them to indicate whether the depicted/described event is plausible or
implausible by pressing one of two buttons. The perceptual task required them to indicate the
direction of stimulus movement (right or left). To ensure that participants always performed the
correct task, a reminder about the task and the response buttons (“plausible=1/implausible=2” or
“moving right=1/left=2”) was visible in the lower right-hand corner of the screen for the duration
of the block. Each stimulus (a picture or a sentence) was presented for 1.5 s, with 0.5 s intervals
between stimuli. Each block began with a 2 s instruction screen to indicate the task, and con-
sisted of 10 trials, for a total duration of 22 s. Trials were presented with a constraint that the
same response (plausible/implausible in the semantic condition, or right/left in the perceptual
condition) did not occur more than 3 times in a row. Each run consisted of 3 fixation blocks and
8 experimental blocks (2 per condition: semantic task − pictures; semantic task − sentences;
perceptual task − pictures; perceptual task − sentences) and lasted 242 s (4 min 2 s). The order


of conditions was palindromic and varied across runs and participants. Each participant com-
pleted 2 runs.

fMRI data acquisition

Structural and functional data were collected on a whole-body 3 Tesla Siemens Trio scanner
with a 32-channel head coil, at the Athinoula A. Martinos Imaging Center at the McGovern
Institute for Brain Research at MIT. T1-weighted structural images were collected in 176 sagittal
slices with 1 mm isotropic voxels (TR = 2,530 ms, TE = 3.48 ms). Functional blood oxygenation
level dependent (BOLD) data were acquired using an echo-planar imaging sequence (con un
90° flip angle and using generalized autocalibrating partial parallel acquisition [GRAPPA]
with an acceleration factor of 2), with the following acquisition parameters: thirty-one 4-mm-
thick near-axial slices acquired in the interleaved order (with 10% distance factor), 2.1 mm ×
2.1 mm in-plane resolution, FoV in the phase encoding (A>>P) direction 200 mm and matrix
size 96 mm × 96 mm, TR = 2,000 ms and TE = 30 ms. The first 10 s of each run were excluded to
allow for steady state magnetization.

fMRI data preprocessing

MRI data were analyzed using SPM12 and custom MATLAB scripts (available in the form of an
SPM toolbox from http://www.nitrc.org/projects/spm_ss). Each participant’s data were motion
corrected and then normalized into a common brain space (the Montreal Neurological Institute
[MNI] template) and resampled into 2 mm isotropic voxels. The data were then smoothed
with a 4 mm FWHM Gaussian filter and high-pass filtered (at 200 s). Effects were estimated
using a General Linear Model in which each experimental condition was modeled with a
boxcar function (modeling entire blocks) convolved with the canonical hemodynamic re-
sponse function.
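To make the regressor-construction step concrete, here is a minimal sketch in R (base functions only) of convolving a block (boxcar) predictor with a canonical double-gamma HRF; the HRF parameters, block onsets, and function names are illustrative assumptions, not the exact SPM12 implementation used for this study.

# A minimal sketch of building one condition regressor for the GLM:
# a boxcar (1 during blocks, 0 elsewhere) convolved with a canonical
# double-gamma HRF. Parameter values are common defaults and are
# assumptions for illustration, not taken from the article.
canonical_hrf <- function(t, a1 = 6, a2 = 16, c = 1/6) {
  dgamma(t, shape = a1, rate = 1) - c * dgamma(t, shape = a2, rate = 1)
}

tr      <- 2                                   # TR = 2,000 ms
n_scans <- 121                                 # e.g., a 242 s run
scan_t  <- seq(0, by = tr, length.out = n_scans)

# Hypothetical block onsets (s) and duration (s) for one condition
onsets   <- c(14, 58, 102, 146)
duration <- 22
boxcar   <- as.numeric(sapply(scan_t, function(t) any(t >= onsets & t < onsets + duration)))

# Convolve with the HRF sampled at the TR and trim to the run length
kernel    <- canonical_hrf(seq(0, 32, by = tr))
regressor <- convolve(boxcar, rev(kernel), type = "open")[seq_len(n_scans)]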

Defining functional regions of interest

The critical analyses were restricted to individually defined language functional regions of
interés (fROIs). These fROIs were defined using the Group-Constrained Subject-Specific ap-
proach (Fedorenko et al., 2010; Julian et al., 2012), where a set of spatial parcels is combined
with each individual subject’s localizer activation map to constrain the definition of individual
fROIs. The parcels mark the expected gross locations of activations for a given contrast based on
prior work and are sufficiently large to encompass the extent of variability in the locations of
individual activations. Here, we used a set of six parcels derived from a group-level probabilistic
activation overlap map for the sentences > nonwords contrast in 220 Participantes. These parcels
included two regions in the left inferior frontal gyrus (IFG, IFGorb), one in the left middle frontal
gyrus (MFG), two in the left temporal lobe (AntTemp and PostTemp), and one extending into
the angular gyrus (AngG). (The parcels are available at https://osf.io/gsudr/). Within each parcel,
we selected the top 10% most responsive voxels, based on the t values for the sentences > non-
words contrast (see Figure 1 in Blank et al., 2014, or Figure 1 in Mahowald & Fedorenko, 2016,
for sample fROIs). Individual-level fROIs defined in this way were then used for subsequent anal-
yses that examined the behavior of the language network during the critical picture/sentence
plausibility task.
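As a schematic of this selection step (the actual analysis uses the spm_ss toolbox referenced above), the R sketch below picks the top 10% of voxels within a parcel by their sentences > nonwords t-value; the function and variable names are ours, for illustration only.

# Sketch: given a logical parcel mask and a vector of localizer t-values over
# voxels, return a mask of the top 10% most localizer-responsive parcel voxels.
define_froi <- function(parcel_mask, localizer_t, top_fraction = 0.10) {
  stopifnot(length(parcel_mask) == length(localizer_t))
  in_parcel <- which(parcel_mask)
  n_keep    <- ceiling(length(in_parcel) * top_fraction)
  keep      <- in_parcel[order(localizer_t[in_parcel], decreasing = TRUE)[seq_len(n_keep)]]
  froi <- rep(FALSE, length(parcel_mask))
  froi[keep] <- TRUE
  froi
}

# Toy example: a 500-voxel parcel inside a 2,000-voxel volume
set.seed(1)
parcel <- rep(c(TRUE, FALSE), c(500, 1500))
tvals  <- rnorm(2000)
froi   <- define_froi(parcel, tvals)
sum(froi)  # 50 voxels, i.e., 10% of the parcel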

Examining the functional response profiles of the language fROIs

For each language fROI in each participant, we averaged the responses across voxels to get a
value for each of the four critical task conditions (semantic task on pictures, semantic task on


sentences, perceptual task on pictures, perceptual task on sentences). We then ran a linear
mixed-effect regression model with two fixed effects (stimulus type and task) and two random
intercepts (participant and fROI). We used sum coding for both stimulus type and task. Planned
follow-up comparisons examined response to sentences and pictures during the semantic
task within each fROI; the results were FDR-corrected (Benjamini & Hochberg, 1995) for the
number of regions. The formula used for the main mixed linear effects model was EffectSize ~
StimType*Task + (1|fROI) + (1|Participant). The formula used for the follow-up comparisons
was EffectSize ~ StimType*Task + (1|Participant).

The analysis was run using the lmer function from the lme4 R package (Bates et al., 2015);
statistical significance of the effects was evaluated using the lmerTest package (Kuznetsova
et al., 2017).
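The models described above can be written out directly in lme4/lmerTest syntax. The sketch below uses simulated data with hypothetical column names (one row per participant × fROI × condition); only the formulas and the sum coding come from the text, and the follow-up p-value extraction and BH correction are shown schematically.

library(lme4)
library(lmerTest)   # adds p values to lmer summaries

# Simulated stand-in for the long-format data (hypothetical column names)
set.seed(1)
df <- expand.grid(
  Participant = factor(1:21),
  fROI        = factor(c("IFGorb", "IFG", "MFG", "AntTemp", "PostTemp", "AngG")),
  StimType    = factor(c("sentence", "picture")),
  Task        = factor(c("semantic", "perceptual"))
)
df$EffectSize <- rnorm(nrow(df))

# Sum coding for both fixed effects, as in the text
contrasts(df$StimType) <- contr.sum(2)
contrasts(df$Task)     <- contr.sum(2)

# Network-level model: random intercepts for fROI and participant
m_network <- lmer(EffectSize ~ StimType * Task + (1 | fROI) + (1 | Participant), data = df)
summary(m_network)

# Follow-up models within each fROI, with BH (FDR) correction across regions
task_p <- sapply(split(df, df$fROI), function(d) {
  m <- lmer(EffectSize ~ StimType * Task + (1 | Participant), data = d)
  coef(summary(m))["Task1", "Pr(>|t|)"]   # p value for the task effect
})
p.adjust(task_p, method = "BH")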

Behavioral analyses

To analyze differences in response times (RT) and accuracy across conditions, we ran a linear
(for RT) and logistic (for accuracy) mixed effect regression model that aimed to mirror the
structure of the mixed effect models in the neuroimaging analyses. Specifically, the behavioral
models used task and stimulus type as fixed effects (with sum contrast coding) and participant
and item as random intercepts. The formulae were Accuracy/RT ~ StimType*Task + (1|Item) +
(1|Participant).
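A parallel sketch for the behavioral models, again with simulated data and hypothetical column names; response times use lmer and accuracy uses a logistic model via glmer, mirroring the participant and item random intercepts described above.

library(lme4)
library(lmerTest)

# Simulated trial-level data (hypothetical column names)
set.seed(2)
beh <- expand.grid(
  Participant = factor(1:21),
  Item        = factor(1:40),
  StimType    = factor(c("sentence", "picture")),
  Task        = factor(c("semantic", "perceptual"))
)
beh$RT       <- rlnorm(nrow(beh), meanlog = 0.2, sdlog = 0.3)
beh$Accuracy <- rbinom(nrow(beh), 1, 0.8)

contrasts(beh$StimType) <- contr.sum(2)
contrasts(beh$Task)     <- contr.sum(2)

# Linear mixed model for response times
m_rt  <- lmer(RT ~ StimType * Task + (1 | Item) + (1 | Participant), data = beh)

# Logistic mixed model for accuracy
m_acc <- glmer(Accuracy ~ StimType * Task + (1 | Item) + (1 | Participant),
               data = beh, family = binomial)

summary(m_rt)
summary(m_acc)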

Experiment 2: Is the Language Network Required for a Nonverbal Event Semantics Task?

Overview

In the second experiment, we examined two individuals with global aphasia, a disorder char-
acterized by severe linguistic impairments, together with a group of age-matched controls. The
participants performed two critical tasks: the picture plausibility judgment task (identical to the
“picture, semantic” condition from Experiment 1) and the sentence–picture matching task
based on the same set of pictures.

Participants

Two participants with global aphasia, S.A. and P.R., took part in the study. Both had large lesions
that had damaged the left IFG, the inferior parietal lobe (supramarginal and angular gyri), and the
superior temporal lobe. At the time of testing, they were 68 and 70 years old, respectively. S.A.
was 22 years 5 months post-onset of his neurological condition, and P.R. was 14 years 7 months
post-onset. S.A. had a subdural empyema in the left sylvian fissure, with associated meningitis that
led to a secondary vascular lesion in left middle cerebral artery territory. P.R. also had a vascular
lesion in left middle cerebral artery territory.

Both participants were male, native English speakers, and did not present with visual impair-
ments. S.A. was premorbidly right-handed; P.R. was premorbidly left-handed, but a left hemi-
sphere lesion that resulted in profound aphasia indicated that he, like most left-handers, was left
hemisphere dominant for language (Pujol et al., 1999). Both individuals were classified as severely
agrammatic (Table 2), but their nonlinguistic cognitive skills were mostly spared (Table 3). They
performed the semantic task and the sentence–picture matching task with a 7-month period
between the two.

We also tested two sets of neurotypical control participants, one for the semantic task and
one for the language task. The semantic task control participants were 12 healthy participants
(7 females) ranging in age from 58 to 78 years (mean age 65.5 years). The language task control


participants were 12 healthy participants (5 females) ranging in age from 58 to 78 years (mean
age 64.7 years). None of the healthy participants had a history of speech or language disorders,
neurological diseases, or reading impairments. All were native English speakers and had normal,
or corrected-to-normal, vision.

Participants undertook the experiments individually, in a quiet room. An experimenter was
present throughout the testing session. The stimuli were presented on an Acer Extensa 5630G
laptop, with the experiment built using DMDX (Forster & Forster, 2003). Ethics approval was
granted by the UCL Research Ethics Committee (LC/2013/05). All participants provided
informed consent prior to taking part in the study.

Semantic task: Picture plausibility judgments

The same picture stimuli were used as in Experiment 1 (see Figure 1), plus one additional
plausible-implausible pair of pictures (which was omitted from the fMRI experiment to have a
total number of stimuli be divisible by four, for the purposes of grouping materials into blocks
and runs), for a total of 82 pictures (41 plausible-implausible pairs). Four of the 82 pictures were
used as training items (see below).

The stimuli were divided into two sets, with an equal number of plausible and implausible
pictures; each plausible-implausible pair was split across the two sets, to minimize repetition of
the same event participants within a set. The order of the trials was randomized within each set,
so that each participant saw the pictures in a different sequence. A self-timed break was placed
between the two sets.

Prior to the experiment, participants were shown two pairs of pictures, which acted as training
items. The pairs consisted of one plausible and one implausible event. They were given clear
instructions to focus on the relationship between the two characters and assess whether they
thought the interaction was plausible, in adherence with normal expectations, or implausible,
at odds with expectations. They were asked to press a green tick (the left button on the mouse) if
they thought the picture depicted a plausible event, and a red cross (the right button on the
mouse) if they thought the picture depicted an implausible event. They were asked to do so
as quickly and accurately as possible. The pictures appeared for a maximum of 8 s, with an
interstimulus interval of 2 s. Accuracies and reaction times were recorded. Participants had
the opportunity to ask any questions, and the instructions for participants with aphasia were
supplemented by gestures to aid comprehension of the task. Participants had to indicate that
they understood the task prior to starting.

Language task: Sentence to picture matching

The same 82 pictures were used as in the plausibility judgment experiment. In this task, a sen-
tence was presented below each picture that either described the picture correctly (e.g., “the cop
is arresting a criminal” for the first sample picture in Figure 1) or had the agent and patient
switched (“the criminal is arresting the cop”). Simple active subject-verb-object sentences were
used. Combining each picture with a matching and a mismatching sentence resulted in 164 trials
in total.

For the control participants, the trials were split into two sets of 82, with an equal number of
plausible and implausible pictures, as well as an equal number of matches and mismatches in
each set. In order to avoid tiring the participants with aphasia, the experiment was administered
across two testing sessions each consisting of two sets of 41 stimuli and occurring within the
same week. For both groups, the order of the trials was randomized separately for each


participant, and no pictures from the same pair (e.g., an event involving a cop and a criminal)
appeared in a row. A self-timed break was placed between the two sets.

Prior to the experiment, participants were told that they would see a series of pictures with
accompanying sentences, and their task was to decide whether the sentence matched the
depicted event. They were asked to press a green tick (the left button on the mouse) if they
thought the sentence matched the picture, and a red cross (the right button on the mouse) if they
thought the sentence did not match the picture. They were asked to do so as quickly and accu-
rately as possible. The picture/sentence combinations appeared for a maximum of 25 s, with an
interstimulus interval of 2 s. Accuracies and reaction times were recorded. As in the critical task,
participants had the opportunity to ask any questions, and the instructions for participants with
aphasia were supplemented by gestures.

Data analysis

We used the exact binomial test to test whether patients’ performance on either task was signifi-
cantly above chance, as well as the Crawford and Howell (1998) test for dissociation to compare
patient performance relative to controls across the two tasks. We excluded all items with reaction
times and/or accuracies outside 3 standard deviations of the control group mean (4 items for the
semantic task and 11 items for the sentence–picture matching task).
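The two tests named here are straightforward to compute; the sketch below shows the exact binomial test and the Crawford and Howell (1998) modified t-test written out from its published formula. Applying that test to each patient's between-task difference relative to the controls' differences is our reading of the dissociation analysis, and all numeric inputs are placeholders.

# Exact binomial test: is a patient's accuracy above chance (0.5)?
n_correct <- 71; n_trials <- 78        # placeholder counts, not the actual trial numbers
binom.test(n_correct, n_trials, p = 0.5, alternative = "greater")

# Crawford & Howell (1998): compare a single case to a small control sample
crawford_howell <- function(case, controls) {
  n <- length(controls)
  t_val <- (case - mean(controls)) / (sd(controls) * sqrt((n + 1) / n))
  c(t = t_val, df = n - 1, p = 2 * pt(-abs(t_val), df = n - 1))
}

# Dissociation (our reading): test the patient's task difference against the
# controls' task differences. Numbers below are simulated placeholders.
set.seed(3)
case_diff     <- 0.91 - 0.61                       # plausibility minus sentence-picture matching
control_diffs <- rnorm(12, mean = -0.03, sd = 0.03)
crawford_howell(case_diff, control_diffs)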

Estimating the damage to the language network in patients with aphasia

In order to visualize the extent of the damage to the language network, we combined the avail-
able structural MRI of one patient with aphasia (P.R.) with a probabilistic activation overlap map
of the language network. The map was created by overlaying thresholded individual activation
maps for the language localizer contrast (sentences > nonwords, as described in Experiment 1) in
220 healthy participants. The maps were thresholded at the p < 0.001 whole-brain uncorrected
level, binarized, and overlaid in the common space, so that each voxel contains information on
the proportion of participants showing a significant language localizer effect (see Woolgar et al.,
2018, for more details). The map can be downloaded from https://osf.io/gsudr/.

RESULTS

Experiment 1: Is the Language Network Active During a Nonverbal Event Semantics Task?

Behavioral results

All participants were engaged during the task: the overall response rate was 91.7% (sentence
semantic 89.9%; sentence perceptual 91.6%; picture semantic 93.6%; picture perceptual 91.9%).
Average response times were 1.27 s (SD = 0.46) for the semantic sentence task, 1.16 s (SD = 0.38)
for the perceptual sentence task, 1.22 s (SD = 0.35) for the semantic picture task, and 1.19 s
(SD = 0.36) for the perceptual picture task. A linear mixed effect model with task and stimulus
type as fixed effects and participant and item number as random intercepts showed a small main
effect of task (semantic > perceptual; β = 0.06, p < 0.001), no main effect of stimulus type
(β = 0.02, p = 0.287), and no interaction between task and stimulus type (β = 0.03, p = 0.359).

Average accuracies were 0.81 for the semantic sentence task, 0.79 for the perceptual sentence
task, 0.75 for the semantic picture task, and 0.75 for the perceptual picture task. A logistic mixed
effect model with the same structure as the linear RT model above showed no significant effects
of either task (β = 0.09, p = 0.198) or stimulus type (β = 0.12, p = 0.101), and no interaction
between them (β = 0.04, p = 0.759). Due to a technical error, accuracy data for 14 participants
were only recorded for one of the two runs.

Neuroimaging results

Although diverse nonlinguistic tasks have been previously shown not to engage the language
network (Fedorenko & Varley, 2016), we found here that the language regions responded more
strongly during the semantic task on both sentences and pictures compared to the perceptual
control task (Figure 2A). A linear mixed effect model with task and stimulus type as fixed effects
and participant and fROI as random effect intercepts showed a significant effect of task
(semantic > perceptual; β = 0.93, p < 0.001), and stimulus type (sentences > pictures; β = 0.23,
p = 0.018), and an interaction between them (β = 0.43, p = 0.025). These results demonstrate
that the language network responds to the semantic task performed on both sentences and
pictures, although this task effect is stronger for sentences.

To investigate individual brain regions comprising the language network, we conducted
follow-up analyses on the activity of individual fROIs (FDR-corrected for the number of re-
gions) (Figure 2B). These revealed a significant semantic > perceptual task effect in all fROIs
(Table 1). The sentences > pictures stimulus type effect was observed in two fROIs, located in
anterior and posterior left temporal lobe. The interaction between task and stimulus type was
not significant in any fROI, although, numerically, responses to sentences during the semantic
task were stronger than responses to any other condition in all fROIs except the left AngG fROI.
We conclude that sensitivity to the semantic task is a general property of all regions in the
language network rather than an effect driven by a subset of regions.

To facilitate the comparison of our results with prior neuroimaging studies, we also performed
a random effects whole-brain group analysis (see Figure S1 in the online supporting information
located at https://www.mitpressjournals.org/doi/suppl/10.1162/nol_a_00030), which yielded
results similar to the fROI-based analyses described above. Specifically, we found that the
semantic > perceptual contrast for both sentences and pictures activates left-lateralized frontal
and temporal regions that overlap with the language parcels (used to constrain the definition of
individual language fROIs). The extent of semantics-evoked activation in the left lateral temporal
areas was weaker for pictures than sentences (the opposite was true on the ventral surface of the
left temporal lobe). Note, however, that these results should be interpreted with caution, since
group analyses might conflate functionally distinct regions that are anatomically close (Nieto-
Castañón & Fedorenko, 2012), especially in association cortex, which tends to be functionally
heterogeneous (Blank et al., 2017; Braga et al., 2019; Fedorenko & Kanwisher, 2009; Frost &
Goebel, 2012; Tahmasebi et al., 2012; Vázquez-Rodríguez et al., 2019).

Overall, the first experiment revealed that the language network is strongly and significantly
recruited for semantic processing of events presented not only verbally (through sentences),
but also nonverbally (through pictures). Specifically, the language network is active when
we interpret pictures that depict agent–patient interactions and relate them to stored world
knowledge. It is worth noting, however, that responses to the semantic task are stronger for
sentences than for pictures (as shown by the interaction between task and stimulus type at
the network level; Figure 2A), suggesting that the language network may play a less important
role in nonverbal semantic processing. To test whether the engagement of the language net-
work is necessary for comprehending visually presented events, we turn to behavioral evi-
dence from individuals with global aphasia.

Experiment 2: Is the Language Network Required for a Nonverbal Event Semantics Task?

We examined two individuals with global aphasia (S.A. and P.R.). Both had suffered large vascular
lesions that resulted in extensive damage to left perisylvian cortex, including the language network


Table 1. Regression model terms for fROI-based statistical analyses

ROI        Regression Term        Beta    p value
IFGorb     Intercept              0.33    0.104
           Stimulus (Sent>Pic)    0.34    0.215
           Task (Sem>Perc)        1.25    <0.001
           Stimulus:Task          0.54    0.283
IFG        Intercept              1.33    <0.001
           Stimulus (Sent>Pic)    0.27    0.259
           Task (Sem>Perc)        1.12    <0.001
           Stimulus:Task          0.57    0.28
MFG        Intercept              0.98    0.002
           Stimulus (Sent>Pic)    0.2     0.259
           Task (Sem>Perc)        0.73    <0.001
           Stimulus:Task          0.74    0.231
AntTemp    Intercept              0.22    0.104
           Stimulus (Sent>Pic)    0.49    0.002
           Task (Sem>Perc)        0.6     <0.001
           Stimulus:Task          0.41    0.24
PostTemp   Intercept              0.5     <0.001
           Stimulus (Sent>Pic)    0.43    0.006
           Task (Sem>Perc)        0.68    <0.001
           Stimulus:Task          0.44    0.24
AngG       Intercept              1.13    0.002
           Stimulus (Sent>Pic)    −0.35   0.215
           Task (Sem>Perc)        1.16    <0.001
           Stimulus:Task          −0.11   0.823

Note. The p values are FDR-corrected for the number of regions (n = 6). Significant terms are highlighted
in bold. The fROI labels correspond to the approximate anatomical locations: IFGorb – the orbital portion
of the left inferior frontal gyrus; IFG – left inferior frontal gyrus; MFG – left middle frontal gyrus; AntTemp –
left anterior temporal cortex; PostTemp – left posterior temporal cortex; AngG – left angular gyrus.

(see Figure 3 for lesion images, including a probabilistic map of the language network based on
fMRI data from neurotypical participants, overlaid onto P.R.’s MRI). Both individuals were severely
agrammatic (Table 2). Whereas they had some residual lexical comprehension ability, scoring well
on tasks involving word–picture matching and synonym matching across spoken and written
modalities, their lexical production was impaired. Both failed to correctly name a single item in a
spoken picture-naming task. S.A. displayed some residual written word production ability, scoring
24 out of 60 in a written picture-naming task. P.R., however, performed poorly in the written task,
correctly naming just 2 out of 60 items.

Table 2. Results of linguistic assessments for participants with global aphasia

                                               Chance Score   S.A.       P.R.
Lexical Tests
ADA spoken word to picture matching            16.5           60/66*     61/66*
ADA written word to picture matching           16.5           62/66*     66/66*
ADA spoken synonym matching                    80             123/160*   121/160*
ADA written synonym matching                   80             113/160*   145/160*
PALPA 54 spoken picture naming                 n/a            0/60       0/60
PALPA 54 written picture naming                n/a            24/60      2/60
Syntactic Tests
Comprehension of spoken reversible sentences   50             49/100     42/100
Comprehension of written reversible sentences  50             38/100     49/100
Written grammaticality judgments               20             26/40*     21/40
Verbal Working Memory
PALPA 13-digit span (recognition)              n/a            3 items    4 items

Note. The tests were taken from the Action for Dysphasic Adults (ADA) Auditory Comprehension Battery
(Franklin et al., 1992) and the Psycholinguistic Assessment of Language Processing in Aphasia (PALPA;
Kay et al., 1992) or developed for the purpose of the study. * Indicates above chance performance (p < 0.05).

S.A. and P.R.’s syntactic processing was severely disrupted. They scored at or below chance in
the reversible spoken and written sentence comprehension tasks (sentence–picture matching),
which included active sentences (e.g., “the man kills the lion”) and passive sentences (e.g., “the
man is killed by the lion”). They also scored near chance in written grammaticality judgment
assessments. The patients’ comprehension performance was impaired regardless of whether the
sentences were presented visually or auditorily, indicating that the impairment was linguistic
rather than perceptual.

To determine whether the sentence comprehension impairments could be explained by working
memory deficits, we evaluated the patients’ phonological working memory by means of a digit
span test (using a recognition paradigm that did not require language production). The patients’
working memory span was somewhat reduced: S.A. and P.R.
had scores of 3 and 4 items, respectively, compared to the neurotypical age-matched controls,
who had an average score of 6.4 (SD = 0.6; see Zimmerer et al., 2019). However, even such a
reduced working memory span should have been sufficient for processing the simple subject-
verb-object sentences that were used in the syntactic assessments, as well as in the critical task
described below. Thus, S.A. and P.R.’s difficulties with linguistic tasks could not be attributed to
phonological working memory problems.

Figure 2. BOLD response during the four experimental conditions within (A) the language network
as a whole and (B) each of the six language fROIs. The fROI labels correspond to approximate
anatomical locations: IFGorb − the orbital portion of the left inferior frontal gyrus; IFG − left inferior
frontal gyrus; MFG − left middle frontal gyrus; AntTemp − left anterior temporal cortex; PostTemp −
left posterior temporal cortex; AngG − left angular gyrus. Within each parcel, the responses to the
critical experiment conditions are extracted from the top 10% most language-responsive voxels
(selected in each of the 21 individuals separately). Error bars indicate standard error of the mean
across participants; dots indicate individual participants’ responses.

Figure 3. Structural MRI images from (A) S.A. and (B) P.R. (C) Probabilistic language activation
overlap map overlaid on top of P.R.’s structural MRI image. The heatmap values range from 0.01
(red) to 0.5 (yellow) and correspond to proportions of individuals (in a set of n = 220) that show a
significant language localizer (sentences > nonwords) effect in that voxel. As can be seen, the lesion
covers most left hemisphere areas with voxels that likely belong to the language network.


Table 3. Results of nonlinguistic assessments for participants with global aphasia

Reasoning Tests                                  S.A.                     P.R.
Raven’s Colored Progressive Matrices             36/36                    34/36
Raven’s Standard Progressive Matrices            53/60                    36/60
Pyramids and Palm Trees (3 picture version)      50/52                    47/52
Visual Pattern Test                              11.5 (90th percentile*)  8.6 (40th percentile*)

Note. * Percentiles are calculated with respect to adults in the same age range with no neurological
impairment.

Importantly, and in line with prior arguments (Fedorenko & Varley, 2016), S.A. and P.R. per-
formed relatively well on nonverbal reasoning tasks, which included measures of fluid
intelligence (Raven’s Standard/Colored Progressive Matrices; Raven & Raven, 2003), object se-
mantics (Pyramids and Palm Trees test; Howard & Patterson, 1992), and visual working memory
(Visual Pattern Test; Della Sala et al., 1999), indicating that the extensive brain damage in these
patients did not ubiquitously affect all cognitive abilities (Table 3). Such a selective impairment
of linguistic skills allowed us to examine the causal role of language in nonverbal event
semantics.

To test whether global aphasia affects general event semantics, we measured S.A. and P.R.’s
performance on two tasks: (1) the picture plausibility task, identical to the pictures/semantic-task
condition from Experiment 1, and (2) a sentence–picture matching task, during which partici-
pants saw a picture together with a sentence in which the agent and the patient either matched
the picture or were switched (“a cop is arresting a criminal” vs. “a criminal is arresting a cop”);
participants had to indicate whether or not the sentence matched the picture. The sentence–
picture matching task was similar to the reversible sentence comprehension task in Table 2,
except that the pictures were identical to the pictures from the plausibility task and all sen-
tences used active voice. For each task, patient performance was compared with the


Figure 4. Individuals with profound aphasia perform well on the picture plausibility judgment task
but fail on the sentence–picture matching task. Patient accuracies are indicated in blue (P.R.) and
green (S.A.); average controls’ performance is shown as gray bars; individual controls’ performance
(n = 12) is shown as gray dots. The dotted line indicates chance performance.


performance of 12 age-matched controls (58−78 years [mean 65.5 years] for the picture plau-
sibility task; 58−78 years [mean 64.7 years] for the sentence–picture matching task).

The results showed a clear difference in performance between the picture plausibility task
and the sentence–picture matching task (Figure 4), despite the fact that both tasks used the same
set of pictures. Both individuals with global aphasia and control participants performed well
above chance when judging picture plausibility. Neurotypical controls had a mean accuracy
de 95.7% (DE = 3.8%). Aphasia patients had mean accuracies of 91.0% (S.A.; 1.2 SD below
promedio) y 84.6% (P.R.; 3.0 SD below average); the exact binomial test showed that perfor-
mance of both patients was above chance (S.A., p < 0.001, 95% CI [0.82, 0.96]; P.R., p < 0.001,
95% CI [0.75, 0.92]). Although their performance was slightly below the level of the controls, the
data indicate that both patients were able to process complex semantic (agent–patient) relations
to evaluate the plausibility of depicted events.

In the sentence–picture matching task, control participants performed close to ceiling, with a
mean accuracy of 98.3% (SD = 1.1%). In contrast, both patients were severely impaired: S.A. had
a mean accuracy of 60.8% and P.R. had a mean accuracy of 46.4%. The exact binomial test
showed that P.R.’s performance was at chance (p = 0.464, 95% CI [0.38, 0.55]), while S.A.’s
performance was above chance (p = 0.009, 95% CI [0.53, 0.69]) but still drastically lower than
that of the controls. This result concurs with S.A.’s and P.R.’s poor performance on the reversible
sentence comprehension tasks, which had a similar setup but used different materials. However,
it stands in stark contrast with the participants’ ability to interpret agent–patient interactions in
pictures. The Crawford and Howell (1998) t test indicated a significant dissociation between the
picture plausibility task and the sentence–picture matching task for both individuals (S.A., t(11) =
18.00, p < 0.001; P.R., t(11) = 24.20, p < 0.001). This dissociation held for both hit rate and false
alarm rate (Figure S2).

The findings from Experiment 2 demonstrate that, in spite of severe linguistic impairments,
individuals with global aphasia were able to access information about event participants depicted
in a visual scene, the action taking place between them, the roles they perform in the context of
this action, and the real-world plausibility of these roles, indicating that none of these processes
require the presence of a functional language network.

DISCUSSION

The relationship between language and thought has been long debated, both in neuroscience
(e.g., Binder & Desai, 2011; Bookheimer, 2002; Fedorenko & Varley, 2016; Friederici, 2020) and
other fields (e.g., Carruthers, 2002; Hauser et al., 2002; Vygotsky, 2012; Winograd, 1976). Here,
we ask whether language-responsive regions of the brain are essential for a core component of
thought: processing combinatorial semantic representations. We demonstrate that left hemisphere
language regions are active during the semantic processing of events shown as pictures, although
the semantic processing of events shown as sentences elicits a stronger response. We further show
that the language network is not essential for nonverbal event semantics, given that the two
individuals with global aphasia, who lack most of their left hemisphere language network, can
still evaluate the plausibility of visually presented events.

Our study advances the field in three ways: (i) it explores relational semantic processing in the
domain of events, moving beyond the semantics of single objects—the focus of most prior
neuroscience work on conceptual processing; (ii) it evaluates neural overlap between verbal and
nonverbal semantics in fMRI at the level of individual participants; and (iii) it provides causal
evidence in support of a dissociation between language and nonverbal event semantics. In the
remainder of the article, we discuss the implications of our results.
The Language Network Is Not Required for Nonverbal Event Semantics

Semantic processing of events is a complex, multi-component process. For instance, deciding
whether or not an event is plausible requires one to (1) identify the relevant event participants,
(2) determine the action taking place between them, (3) decipher the role that each event partici-
pant is performing (in our task, agent vs. patient), and finally, (4) estimate the likelihood that a
given participant would be the agent/patient of the relevant action. Whereas the first three
components can, at least in part, be attributed to input-specific processes (e.g., high-level vision),
establishing plausibility cannot be solely attributed to perception: In order to decide whether a
cop arresting a criminal is more likely than a criminal arresting a cop, participants need to draw
on their world knowledge. We demonstrate that this highly abstract process can proceed even
when the language network is severely impaired, thus providing strong evidence that a functional
language network is not required for nonverbal semantic processing.

The functional dissociation between language-based and vision-based semantic judgments of
events accords with the fact that both non-human animals and preverbal infants are capable of
complex event processing (Seed & Tomasello, 2010; Spelke, 1976) and that specialized neural
mechanisms, distinct from the language network, have been associated with visual understanding
of actions (Fang et al., 2016; Häberling et al., 2016; Tarhan & Konkle, 2020) and interactions
between animate and/or inanimate entities (Fischer et al., 2016; Walbrin et al., 2018). These
neural mechanisms are either bilateral or right-lateralized, which constitutes further evidence of
their dissociation from language, which is typically left-lateralized.

Our results are also consistent with reports of a dissociation between verbal and nonverbal
semantic processing of single objects in patients with aphasia (e.g., Antonucci & Reilly, 2008;
Bi et al., 2011; Chertkow et al., 1997; Jefferies & Lambon Ralph, 2006; Lambon Ralph et al., 2010)
and semantic dementia (e.g., Binney et al., 2016; Gorno-Tempini et al., 2004; Mion et al., 2010;
Snowden et al., 2018; Thompson et al., 2003). Those studies typically report that linguistic
impairments arise as a result of left hemisphere damage, whereas nonverbal semantic processing
deficits are considered to be caused by either bilateral (Lambon Ralph et al., 2017) or right-
lateralized lesions (Gainotti, 2011, 2015). Our work contributes to this literature by showing that
the language-semantics dissociation holds not only for single concepts but also for combinatorial
event-level representations (see also Colvin et al., 2019; Dickey & Warren, 2015). Although we
only test two individuals with global aphasia, these data provide an important contribution to the
field because of the unique nature of the impairment in these individuals: large-scale disruption
of multiple linguistic functions and relatively preserved nonverbal cognition. To test the
generalizability of our findings, future work should evaluate a larger sample of individuals with
such a dissociation and comprehensively assess both verbal and nonverbal semantic processing
of objects, actions, and events.

If language is not essential for event semantics, why is the language network active during a
nonverbal event semantics task?
It is possible that neurotypical participants partially recode pictorial stimuli into a verbal format (Greene & Fei-Fei, 2014; Trueswell & Papafragou, 2010), which could provide access to linguistic representations as an additional source of task-relevant information (Connell & Lynott, 2013). Indeed, text-based computational models developed in recent years have been shown to successfully perform a wide range of “semantic” tasks, such as inference, paraphrasing, and question answering (Brown et al., 2020; Devlin et al., 2018, among others). Even simple n-gram models can be used to determine the probability of certain events by, for example, estimating the probability that the phrase “is arresting” directly follows “cop” versus “criminal.” Such language-based semantic information is distinct from non-language- based world knowledge (Clark, 2004; Lucy & Gauthier, 2017), and both kinds of information Neurobiology of Language 191 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d / . l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics can be flexibly used depending on task demands (Willits et al., 2015). As a result, it is possible that linguistic resources (housed in the language network) provide an additional source of infor- mation when neurotypical individuals determine visual event plausibility. The absence of this additional information source may account for the small decrement in performance observed in participants with aphasia relative to the control participants. One might speculate that this “language-based” semantic processing route plays a primary role in neurotypical participants, whereas patients with aphasia rely on some alternative route that arose due to the functional reorganization of the brain postinjury. However, we consider this possibility unlikely. Past behavioral evidence from experiments in neurotypical individuals shows that verbal recoding of visual information is relatively slow and can only occur after semantic information has been retrieved from the picture (Potter et al., 1986; Potter & Faulconer, 1975). Furthermore, participants do not typically generate covert verbal labels for visually presented objects unless instructed to do so (Dahan et al., 2001; Magnuson et al., 2003; Rehrig et al., 2020; cf. Meyer et al., 2007) or unless the task imposes memory demands (Pontillo et al., 2015). Our stimuli depicted complex two-participant events, making verbal recoding even more effortful than recoding of single objects and, therefore, unlikely to occur during a task that does not require linguistic label generation (Papafragou et al., 2008). Finally, even if individuals with aphasia did rely on a compensatory (e.g., right hemisphere mediated) mechanism for semantic processing, it would still indicate that brain mechanisms outside of the core left hemisphere language network are capable of supporting combinatorial seman- tics, thus underscoring our claim that language and nonverbal event semantics are neurally dissociable. Future work should further investigate the nature of the language network’s responses to non- verbal stimuli. Although some studies, like ours, have reported that the left hemisphere language regions have stronger responses to sentences than to content-matched pictures (Amit et al., 2017), others have reported the opposite preference (Jouen et al., 2015). The divergent result in Jouen et al. 
(2015) is most likely due to differences in the analytic approach, namely, in the use of ROIs derived from group analyses as opposed to functionally defined fROIs. Task de- mands could also contribute to the difference in results: Jouen et al. used a one-back memory task (no condition-specific behavioral results were reported), whereas we used a plausibility judgment task that had similar accuracies and reaction times between the sentences and pic- tures. The fact that we found an interaction between input type (sentences vs. pictures) and task also indicates that task effects on activity in the language network merit additional investigation (although see Cheung et al., 2020, for evidence that task demands often have little effect on the responses of the language regions to verbal stimuli). The task effects observed in our study cannot be explained by task difficulty: The participants’ accuracies for the semantic versus perceptual task were not significantly different; the reaction times were slightly faster for the perceptual task, but the effect size was small (0.06 s, with average trial RT = 1.21 s) and therefore unlikely to fully account for the neural effect. Moreover, the language network is not generally driven by task difficulty (Diachek et al., 2020) and shows strong, consistent responses even in the absence of task (Baldassano et al., 2018; Brennan et al., 2016; Huth et al., 2016; Scott et al., 2017; Shain et al., 2020; Wehbe et al., 2014, among others). Thus, future work needs to explore the effects of task content rather than task difficulty per se. Implications for Theories of Semantics in the Brain In this paper, we focused on the role of the language network in nonverbal event semantics, not on the question of which cognitive and neural mechanisms support modality-invariant event Neurobiology of Language 192 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d . / l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics processing (we report those analyses in other work; Ivanova et al., 2021). Nonetheless, current results also bear on general theories of semantic processing in the mind and brain. Many current theories of semantics highlight broad anatomical areas implicated in linguistic processing as putative semantic hubs. Those include left AngG (e.g., Binder & Desai, 2011), left inferior frontal cortex (e.g., Hagoort & van Berkum, 2007), and the anterior temporal lobes (ATL; e.g., Patterson et al., 2007). However, the areas in question are large patches of cortex that are structurally and functionally heterogeneous: As a result, simply because a visual-semantics study reports activation within the left IFG or AngG does not mean that the language-responsive portions of those broad areas are at play (see, e.g., Fedorenko & Blank, 2020, for discussion). In the current study, the language-responsive fROIs that we defined within left AngG, left in- ferior frontal cortex, and left ATL all responded more strongly during the semantic task than during the perceptual task, for both sentences and pictures. Although this pattern is consistent with evidence of their general involvement in semantic processing, it goes against some of the specific claims made in the literature. For example, our results are inconsistent with the claim that the angular gyrus is the primary region involved in event semantics (Binder & Desai, 2011; cf. 
Williams et al., 2017) given that other regions show a similar functional response profile. That said, the fROI in the angular gyrus was the only one that showed numerically stronger responses to pictures than to sentences, consistent with evidence of its involvement in process- ing (at least some) semantically meaningful nonverbal stimuli (Amit et al., 2017; Baldassano et al., 2017; Fairhall & Caramazza, 2013; Handjaras et al., 2017; Pritchett et al., 2018). Our results also provide some evidence that a portion of the left ATL is engaged in processing event-level representations in verbal stimuli (Jackson et al., 2015; Teige et al., 2019; cf. Lewis et al., 2015; Schwartz et al., 2011; Xu et al., 2018, who claim that the ATL is involved in retrieving property-level but not event-level information). Finally, we observed that the ATL language fROI responded more strongly to sentences than to pictures, which might speak against its role as an amodal semantic hub. Note, however, that this fROI encompasses only a small fraction of left ATL; it therefore remains possible that some other parts of the ATL— especially its ventral/ventromedial portions—have a modality-invariant response profile (Lambon Ralph et al., 2017; Visser et al., 2012). In addition, our findings contribute to the body of work on the neural representation of agent–patient relationships. Previous experiments attempting to localize brain regions that sup- port thematic role processing have attributed the processing of agent–patient relations to the left hemisphere. Frankland and Greene (2015, 2020) used sentence stimuli to isolate distinct areas in left superior temporal sulcus (STS) that are sensitive to the identity of the agent versus the patient. J. Wang et al. (2016) found that the same (or nearby) STS regions also contained information about thematic roles in videos depicting agent–patient interactions. However, the latter study identified a number of other regions that were sensitive to thematic role informa- tion, including clusters in right posterior middle temporal gyrus and right angular gyrus, sug- gesting that left STS is not the only region implicated in thematic role processing. A similar distributed pattern was also reported in a neuropsychological study (Wu et al., 2007) that found that lesions to mid-STS led to difficulties in extracting thematic role information from both sentences and pictures; however, deficits in visual agent–patient processing were additionally associated with lesions in anterior superior temporal gyrus, supramarginal gyrus, and inferior frontal cortex, which casts further doubt on the unique role of the left STS in agent–patient relation processing. In sum, the evidence to date suggests that parts of the left STS may play a role in processing linguistic information, including thematic relations (Frankland & Greene, 2015, 2020) and verb argument structure (Elli et al., 2019; Williams et al., 2017), but addi- tional brain regions support the processing of event participant roles in nonverbal stimuli. Neurobiology of Language 193 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d . / l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics Finally, our results are generally consistent with a distributed view of semantic representa- tions (McClelland & Rogers, 2003; Tyler & Moss, 2001). 
Multiple recent studies found that semantic information is not uniquely localized to any given brain region but rather distributed across the cortex (e.g., Anderson et al., 2017; Huth et al., 2016; Pereira et al., 2018; X. Wang et al., 2018). Distributing information across a network of regions in both left and right hemi- spheres enables the information to be preserved in case of brain damage (Schapiro et al., 2013), which would explain why patients with global aphasia preserve the ability to interpret visually presented events. That said, the findings reported here do not speak to the question of whether such representations rely primarily on sensorimotor areas (Barsalou, 2008; Pulvermuller, 1999) or on associative areas (Mahon, 2015; Mahon & Caramazza, 2008). Implications for Neuroimaging Studies of Amodal Semantics The non-causal nature of the language network activation during a nonverbal semantic task has important implications for the study of amodal/multimodal concept representations. A signifi- cant body of work has aimed to isolate amodal representations of concepts by investigating the overlap between regions active during the viewing of verbal and nonverbal stimuli (Bright et al., 2004; Devereux et al., 2013; Fairhall & Caramazza, 2013; Handjaras et al., 2017; Sevostianov et al., 2002; Thierry & Price, 2006; Vandenberghe et al., 1996; Visser et al., 2012; A. D. Wagner et al., 1997). Most of these overlap-based studies have attributed semantic processing to frontal, temporal, and/or parietal regions within the left hemisphere. Our work, however, demonstrates that, even though meaningful linguistic and visual stimuli evoke over- lapping activity in left-lateralized frontal and temporal regions, conceptual information about events persists even when most of these regions are damaged. Thus, overlapping areas of activation for verbal and nonverbal semantic tasks observed in brain imaging studies do not necessarily play a causal role in amodal event semantics. Overall, our study emphasizes the importance of investigating combinatorial semantic pro- cessing using both verbal and nonverbal stimuli. Our results show that semantic processing of visually presented events does not require the language network, drawing a sharp distinction between language and nonverbal event semantics and highlighting the necessity to charac- terize the relationship between them in greater detail using a combination of brain imaging and patient evidence. ACKNOWLEDGMENTS We would like to acknowledge the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research at MIT, and its support team (Steve Shannon and Atsushi Takahashi). We thank Birgit Zimmerer for creating the picture stimuli used in both experiments, Chloe Bustin for norming the stimuli, Lily Jordan for help with the behavioral piloting of the fMRI experiment, and EvLab members for their help with fMRI data collection. Evelina Fedorenko was supported by NIH awards R00-HD057522, R01-DC016607, and R01-DC016950, by a grant from the Simons Foundation to the Simons Center for the Social Brain at MIT, and by funds from BCS and the McGovern Institute for Brain Research at MIT. Rosemary Varley was supported by Arts and Humanities Research Council and Alzheimer’s Society awards. FUNDING INFORMATION Evelina Fedorenko, National Institutes of Health (http://dx.doi.org/10.13039/100000002), Award ID: R00-HD057522. Evelina Fedorenko, National Institutes of Health (http://dx.doi.org/10.13039 /100000002), Award ID: R01-DC016607. 
Evelina Fedorenko, National Institutes of Health Neurobiology of Language 194 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d . / l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics (http://dx.doi.org/10.13039/100000002), Award ID: R01-DC016950. Evelina Fedorenko, Simons Foundation (http://dx.doi.org/10.13039/100000893). Evelina Fedorenko, McGovern Institute for Brain Research at MIT. Evelina Fedorenko, Massachusetts Institute of Technology (http://dx.doi .org/10.13039/100006919). Rosemary Varley, Arts and Humanities Research Council (http://dx .doi.org/10.13039/501100000267). Rosemary Varley, Alzheimer’s Society. AUTHOR CONTRIBUTIONS Anna Ivanova: Data curation; Formal analysis; Investigation; Software; Validation; Visualization; Writing – original draft: preparation. Zachary Mineroff: Data curation; Formal analysis; Investigation; Software. Vitor Zimmerer: Conceptualization; Data curation; Investigation; Methodology; Writing – review & editing. Nancy Kanwisher: Conceptualization; Supervision, Writing – review & editing. Rosemary Varley: Conceptualization; Funding acquisition; Methodology; Project administration; Resources; Supervision; writing – review & editing. Evelina Fedorenko: Conceptualization; Funding acquisition; Methodology; Project administration; Resources; Supervision; writing – review & editing. REFERENCES Altshuler, D., Parsons, T., & Schwarzschild, R. (2019). A course in semantics. MIT Press. Amalric, M., & Dehaene, S. (2019). A distinct cortical network for mathematical knowledge in the human brain. NeuroImage, 189, 19–31. DOI: https://doi.org/10.1016/j.neuroimage.2019.01.001, PMID: 30611876 Amit, E., Hoeflin, C., Hamzah, N., & Fedorenko, E. (2017). An asymmetrical relationship between verbal and visual thinking: Converging evidence from behavior and fMRI. NeuroImage, 152, 619–627. DOI: https://doi.org/10.1016/j.neuroimage.2017 .03.029, PMID: 28323162, PMCID: PMC5448978 Anderson, A. J., Binder, J. R., Fernandino, L., Humphries, C. J., Conant, L. L., Aguilar, M., Wang, X., Doko, D., & Raizada, R. D. S. (2017). Predicting neural activity patterns associated with sentences using a neurobiologically motivated model of semantic representation. Cerebral Cortex, 27(9), 4379–4395. DOI: https://doi.org/10.1093/cercor/bhw240, PMID: 27522069 Antonucci, S. M., & Reilly, J. (2008). Semantic memory and language processing: A primer. Seminars in Speech and Language, 29(1), 5–17. DOI: https://doi.org/10.1055/s-2008-1061621, PMID: 18348088 Baldassano, C., Chen, J., Zadbood, A., Pillow, J. W., Hasson, U., & Norman, K. A. (2017). Discovering event structure in continuous narrative perception and memory. Neuron, 95(3), 709–721.e5. DOI: https://doi.org/10.1016/j.neuron.2017.06.041, PMID: 28772125, PMCID: PMC5558154 Baldassano, C., Hasson, U., & Norman, K. A. (2018). Representation of real-world event schemas during narrative perception. Journal of Neuroscience, 38(45), 9689–9699. DOI: https://doi.org /10.1523/JNEUROSCI.0251-18.2018, PMID: 30249790, PMCID: PMC6222059 Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59(1), 617–645. DOI: https://doi.org/10.1146/annurev .psych.59.103006.093639, PMID: 17705682 Basso, A., & Capitani, E. (1985). Spared musical abilities in a con- ductor with global aphasia and ideomotor apraxia. Journal of Neurology, Neurosurgery, and Psychiatry, 48(5), 407–412. 
DOI: https://doi.org/10.1136/jnnp.48.5.407, PMID: 2582094, PMCID: PMC1028326 Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. DOI: https://doi.org/10.18637/jss.v067.i01 Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1), 289–300. DOI: https://doi.org/10.1111/j.2517-6161.1995 .tb02031.x Berwick, R. C., & Chomsky, N. (2016). Why only us: Language and evolution. MIT Press. DOI: https://doi.org/10.7551/mitpress /9780262034241.001.0001 Bi, Y., Wei, T., Wu, C., Han, Z., Jiang, T., & Caramazza, A. (2011). The role of the left anterior temporal lobe in language processing revisited: Evidence from an individual with ATL resection. Cortex, 47(5), 575–587. DOI: https://doi.org/10.1016/j.cortex .2009.12.002, PMID: 20074721 Bickerton, D. (1990). Language & species. University of Chicago Press. DOI: https://doi.org/10.7208/chicago/9780226220949 .001.0001 Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15(11), 527–536. DOI: https://doi.org/10.1016/j.tics.2011.10.001, PMID: 22001867, PMCID: PMC3350748 Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta- analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767–2796. DOI: https://doi.org/10.1093/cercor/bhp055, PMID: 19329570, PMCID: PMC2774390 Binney, R. J., Henry, M. L., Babiak, M., Pressman, P. S., Santos-Santos, M. A., Narvid, J., Mandelli, M. L., Strain, P. J., Miller, B. L., Rankin, K. P., Rosen, H. J., & Gorno-Tempini, M. L. (2016). Reading words and other people: A comparison of exception word, familiar face and affect processing in the left and right temporal variants of primary progressive aphasia. Cortex, 82, 147–163. DOI: https:// doi.org/10.1016/j.cortex.2016.05.014, PMID: 27389800, PMCID: PMC4969161 Blank, I. A., Kanwisher, N., & Fedorenko, E. (2014). A functional dissociation between language and multiple-demand systems revealed in patterns of BOLD signal fluctuations. Journal of Neurobiology of Language 195 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d / . l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics Neurophysiology, 112(5), 1105–1118. DOI: https://doi.org/10 .1152/jn.00884.2013, PMID: 24872535, PMCID: PMC4122731 Blank, I. A., Kiran, S., & Fedorenko, E. (2017). Can neuroimaging help aphasia researchers? Addressing generalizability, variability, and interpretability. Cognitive Neuropsychology, 34(6), 377–393. DOI: https://doi.org/10.1080/02643294.2017.1402756, PMID: 29188746, PMCID: PMC6157596 Bookheimer, S. (2002). Functional MRI of language: New ap- proaches to understanding the cortical organization of semantic processing. Annual Review of Neuroscience, 25(1), 151–188. DOI: https://doi.org/10.1146/annurev.neuro.25.112701.142946, PMID: 12052907 Braga, R. M., Van Dijk, K. R. A., Polimeni, J. R., Eldaief, M. C., & Buckner, R. L. (2019). Parallel distributed networks resolved at high resolution reveal close juxtaposition of distinct regions. Journal of Neurophysiology, 121(4), 1513–1534. DOI: https://doi.org/10 .1152/jn.00808.2018, PMID: 30785825, PMCID: PMC6485740 Brennan, J. 
R., Stabler, E. P., Van Wagenen, S. E., Luh, W.-M., & Hale, J. T. (2016). Abstract linguistic structure correlates with temporal activity during naturalistic comprehension. Brain and Language, 157–158, 81–94. DOI: https://doi.org/10.1016 /j.bandl.2016.04.008, PMID: 27208858, PMCID: PMC4893969 Bright, P., Moss, H., & Tyler, L. K. (2004). Unitary vs multiple seman- tics: PET studies of word and picture processing. Brain and Language, 89(3), 417–432. DOI: https://doi.org/10.1016/j.bandl .2004.01.010, PMID: 15120534 Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. ArXiv:2005.14165 [Cs]. http://arxiv.org/abs/2005.14165 Carlson, G. N., & Tanenhaus, M. K. (1988). Thematic roles and language comprehension. In W. Wilkins (Ed.), Thematic relations (pp. 263–288). Brill. DOI: https://doi.org/10.1163/9789004373211_015 Carruthers, P. (2002). The cognitive functions of language. The Behavioral and Brain Sciences, 25(6), 657–674; discussion 674–725. DOI: https://doi.org/10.1017/s0140525x02000122, PMID: 14598623 Chen, G., Taylor, P. A., & Cox, R. W. (2017). Is the statistic value all we should care about in neuroimaging? NeuroImage, 147, 952–959. DOI: https://doi.org/10.1016/j.neuroimage.2016.09.066, PMID: 27729277, PMCID: PMC6591724 Chen, X., Affourtit, J., Norman-Haignere, S., Jouravlev, O., Malik- Moraleda, S., Kean, H. H., Regev, T., McDermott, J. H., & Fedorenko, E. (2021). The fronto-temporal language system does not support the processing of music [Manuscript in preparation]. Department of Brain and Cognitive Sciences, MIT, and Division of Psychology and Language Sciences, University College London. Chertkow, H., Bub, D., Deaudon, C., & Whitehead, V. (1997). On the status of object concepts in aphasia. Brain and Language, 58(2), 203–232. DOI: https://doi.org/10.1006/brln.1997.1771, PMID: 9182748 Cheung, C., Ivanova, A. A., Siegelman, M., Pongos, A. L. A., Kean, H. H., & Fedorenko, E. (2020). The effect of task on sentence processing in the brain [Poster presentation]. The Society for the Neurobiology of Language. https://www.neurolang.org/ Chomsky, N. (2007). Biolinguistic explorations: Design, develop- ment, evolution. International Journal of Philosophical Studies, 15(1), 1–21. DOI: https://doi.org/10.1080/09672550601143078 Clark, E. V. (2004). How language acquisition builds on cognitive development. Trends in Cognitive Sciences, 8(10), 472–478. DOI: https://doi.org/10.1016/j.tics.2004.08.012, PMID: 15450512 Coco, M. I., Nuthmann, A., & Dimigen, O. (2020). Fixation-related brain potentials during semantic integration of object-scene information. Journal of Cognitive Neuroscience, 32(4), 571–589. DOI: https://doi.org/10.1162/jocn_a_01504, PMID: 31765602 Cohn, N. (2020). Your brain on comics: A cognitive model of visual narrative comprehension. Topics in Cognitive Science, 12(1), 352–386. DOI: https://doi.org/10.1111/tops.12421, PMID: 30963724 Cohn, N., & Paczynski, M. (2013). Prediction, events, and the advantage of Agents: The processing of semantic roles in visual narrative. Cognitive Psychology, 67(3), 73–97. DOI: https://doi .org/10.1016/j.cogpsych.2013.07.002, PMID: 23959023, PMCID: PMC3895484 Colvin, M., Warren, T., & Dickey, M. W. (2019). 
Event knowledge and verb knowledge predict sensitivity to different aspects of semantic anomalies in aphasia. In K. Carlson, C. Clifton, Jr., & J. D. Fodor (Eds.), Grammatical approaches to language pro- cessing: Essays in honor of Lyn Frazier (pp. 241–259). Springer International Publishing. DOI: https://doi.org/10.1007/978-3 -030-01563-3_13 Connell, L., & Lynott, D. (2013). Flexible and fast: Linguistic shortcut affects both shallow and deep conceptual processing. Psychonomic Bulletin & Review, 20(3), 542–550. DOI: https:// doi.org/10.3758/s13423-012-0368-x, PMID: 23307559 Crawford, J. R., & Howell, D. C. (1998). Comparing an individual’s test score against norms derived from small samples. The Clinical Neuropsychologist, 12(4), 482–486. DOI: https://doi.org/10 .1076/clin.12.4.482.7241 Dahan, D., Magnuson, J. S., & Tanenhaus, M. K. (2001). Time course of frequency effects in spoken-word recognition: Evidence from eye movements. Cognitive Psychology, 42(4), 317–367. DOI: https://doi.org/10.1006/cogp.2001.0750, PMID: 11368527 Davidson, D. (1975). Thought and talk. In S. D. Guttenplan (Ed.), Mind and language (pp. 7–23). Clarendon Press. Della Sala, S., Gray, C., Baddeley, A., Allamano, N., & Wilson, L. (1999). Pattern span: A tool for unwelding visuo-spatial memory. Neuropsychologia, 37(10), 1189–1199. DOI: https://doi.org /10.1016/s0028-3932(98)00159-6, PMID: 10509840 Devereux, B. J., Clarke, A., Marouchos, A., & Tyler, L. K. (2013). Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. The Journal of Neuroscience, 33(48), 18906–18916. DOI: https://doi .org/10.1523/ JNEUROSCI.3809-13.2013, PMID: 24285896, PMCID: PMC3852350 Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language under- standing. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Diachek, E., Blank, I., Siegelman, M., Affourtit, J., & Fedorenko, E. (2020). The domain-general multiple demand (MD) network does not support core aspects of language comprehension: A large-scale fMRI investigation. Journal of Neuroscience, 40(23), 4536–4550. DOI: https://doi.org/10.1523/ JNEUROSCI.2036 -19.2020, PMID: 32317387, PMCID: PMC7275862 Dickey, M. W., & Warren, T. (2015). The influence of event-related kn ow le dge on verb -arg um ent p r oc essing in ap ha sia. Neuropsychologia, 67, 63–81. DOI: https://doi.org/10.1016/j .neuropsychologia.2014.12.003, PMID: 25484306, PMCID: PMC4297691 Dresang, H. C., Dickey, M. W., & Warren, T. C. (2019). Semantic memory for objects, actions, and events: A novel test of event-related con- ceptual semantic knowledge. Cognitive Neuropsychology, 36(7–8), 313–335. DOI: https://doi.org/10.1080/02643294.2019.1656604, PMID: 31451020, PMCID: PMC7042074 Elli, G. V., Lane, C., & Bedny, M. (2019). A double dissociation in sensitivity to verb and noun semantics across cortical networks. Cerebral Cortex, 29(11), 4803–4817. DOI: https://doi.org/10.1093 /cercor/bhz014, PMID: 30767007, PMCID: PMC6917520 Neurobiology of Language 196 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d / . l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics Estes, Z., Golonka, S., & Jones, L. L. (2011). Thematic thinking: The apprehension and consequences of thematic relations. In B. H. 
Ross (Ed.), The psychology of learning and motivation: Advances in research and theory (pp. 249–294). Elsevier Academic Press. DOI: https://doi.org/10.1016/B978-0-12-385527-5.00008-5 Fairhall, S. L., & Caramazza, A. (2013). Brain regions that represent amodal conceptual knowledge. Journal of Neuroscience, 33(25), 10552–10558. DOI: https://doi.org/10.1523/JNEUROSCI.0051 -13.2013, PMID: 23785167, PMCID: PMC6618586 Fang, Y., Chen, Q., Lingnau, A., Han, Z., & Bi, Y. (2016). Areas recruited during action understanding are not modulated by auditory or sign language experience. Frontiers in Human Neuroscience, 10, 94. DOI: https://doi.org/10.3389/fnhum .2016.00094, PMID: 27014025, PMCID: PMC4781852 Fedorenko, E., Bers, M. U., & Kanwisher, N. (2011). Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences, 108(39), 16428–16433. DOI: https://doi.org/10.1073/pnas.1112937108, PMID: 21885736, PMCID: PMC3182706 Fedorenko, E., & Blank, I. A. (2020). Broca’s area is not a natural kind. Trends in Cognitive Sciences, 24(4), 270–284. DOI: https:// doi.org/10.1016/j.tics.2020.01.001, PMID: 32160565, PMCID: PMC7211504 Fedorenko, E., Hsieh, P.-J., Nieto-Castañón, A., Whitfield-Gabrieli, S., & Kanwisher, N. (2010). New method for fMRI investigations of language: Defining ROIs functionally in individual subjects. Journal of Neurophysiology, 104(2), 1177–1194. DOI: https:// doi.org/10.1152/jn.00032.2010, PMID: 20410363, PMCID: PMC2934923 Fedorenko, E., & Kanwisher, N. (2009). Neuroimaging of language: Why hasn’t a clearer picture emerged? Language and Linguistics Compass, 3(4), 839–865. DOI: https://doi.org/10.1111/j.1749-818X .2009.00143.x Fedorenko, E., & Varley, R. A. (2016). Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. Annals of the New York Academy of Sciences, 1369(1), 132–153. DOI: https://doi.org/10.1111/nyas.13046, PMID: 27096882, PMCID: PMC4874898 Fillmore, C. J. (1968). Lexical entries for verbs. Foundations of Language 4(1968), 373–393. Fillmore, C. J. (2002). Form and meaning in language: Vol. I, Papers on semantic roles (74th ed.). Center for the Study of Language and Information. Fillmore, C. J. (2006). Frame semantics. In D. Geeraerts (Ed.), Cognitive linguistics: Basic readings (pp. 373–400). Mouton de Gruyter. DOI: https://doi.org/10.1515/9783110199901.373 Fischer, J., Mikhael, J. G., Tenenbaum, J. B., & Kanwisher, N. (2016). Functional neuroanatomy of intuitive physical inference. Proceedings of the National Academy of Sciences, 113(34), E5072–E5081. DOI: https://doi.org/10.1073/pnas.1610344113, PMID: 27503892, PMCID: PMC5003259 Forster, K. I., & Forster, J. C. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35(1), 116–124. DOI: https://doi.org /10.3758/BF03195503, PMID: 12723786 Frankland, S. M., & Greene, J. D. (2015). An architecture for encoding sentence meaning in left mid-superior temporal cortex. Proceedings of the National Academy of Sciences, 112(37), 11732–11737. DOI: https://doi.org/10.1073/pnas.1421236112, PMID: 26305927, PMCID: PMC4577152 Frankland, S. M., & Greene, J. D. (2020). Two ways to build a thought: Distinct forms of compositional semantic representation across brain regions. Cerebral Cortex, 30(6), 3838–3855. DOI: https://doi.org/10.1093/cercor/bhaa001, PMID: 32279078 Franklin, S., Turner, J. E., & Ellis, A.W. (1992). The ADA auditory comprehension battery. University of York. 
Friederici, A. D. (2020). Hierarchy processing in human neurobiol- ogy: How specific is it? Philosophical Transactions of the Royal Society B: Biological Sciences, 375(1789), 20180391. DOI: https:// doi.org/10.1098/rstb.2018.0391, PMID: 31735144, PMCID: PMC6895560 Frost, M. A., & Goebel, R. (2012). Measuring structural-functional correspondence: Spatial variability of specialised brain regions after macro-anatomical alignment. NeuroImage, 59(2), 1369–1381. DOI: https://doi.org/10.1016/j.neuroimage.2011.08.035, PMID: 21875671 Gainotti, G. (2011). The organization and dissolution of semantic- conceptual knowledge: Is the “amodal hub” the only plausible model? Brain and Cognition, 75(3), 299–309. DOI: https://doi .org/10.1016/j.bandc.2010.12.001, PMID: 21211892 Gainotti, G. (2015). Is the difference between right and left ATLs due to the distinction between general and social cognition or between verbal and non-verbal representations? Neuroscience & Biobehavioral Reviews, 51, 296–312. DOI: https://doi.org /10.1016/j.neubiorev.2015.02.004, PMID: 25697904 Goldstein, I., & Papert, S. (1977). Artificial intelligence, language, and the study of knowledge. Cognitive Science, 1(1), 84–123. DOI: https://doi.org/10.1016/S0364-0213(77)80006-2 Gorno-Tempini, M. L., Rankin, K. P., Woolley, J. D., Rosen, H. J., Phengrasamy, L., & Miller, B. L. (2004). Cognitive and behavioral profile in a case of right anterior temporal lobe neurodegenera- tion. Cortex, 40(4), 631–644. DOI: https://doi.org/10.1016/S0010 -9452(08)70159-X, PMID: 15505973 Greene, M. R., & Fei-Fei, L. (2014). Visual categorization is automatic and obligatory: Evidence from Stroop-like paradigm. Journal of Vision, 14(1). DOI: https://doi.org/10.1167/14.1.14, PMID: 24434626 Gruber, J. S. (1965). Studies in lexical relations [Unpublished doc- toral thesis]. Massachusetts Institute of Technology. Häberling, I. S., Corballis, P. M., & Corballis, M. C. (2016). Language, gesture, and handedness: Evidence for independent lateralized networks. Cortex, 82, 72–85. DOI: https://doi.org/10 .1016/j.cortex.2016.06.003, PMID: 27367793 Hafri, A., Trueswell, J. C., & Strickland, B. (2018). Encoding of event roles from visual scenes is rapid, spontaneous, and inter- acts with higher-level visual processing. Cognition, 175, 36–52. DOI: https://doi.org/10.1016/j.cognition.2018.02.011, PMID: 29459238, PMCID: PMC5879027 Hagoort, P., & van Berkum, J. (2007). Beyond the sentence given. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481), 801–811. DOI: https://doi.org/10.1098 /rstb.2007.2089, PMID: 17412680, PMCID: PMC2429998 Handjaras, G., Leo, A., Cecchetti, L., Papale, P., Lenci, A., Marotta, G., Pietrini, P., & Ricciardi, E. (2017). Modality-independent encoding of individual concepts in the left parietal cortex. Neuropsychologia, 105, 39–49. DOI: https://doi.org/10.1016 /j.neuropsychologia.2017.05.001, PMID: 28476573 Hasson, U., Chen, J., & Honey, C. J. (2015). Hierarchical process memory: Memory as an integral component of information pro- cessing. Trends in Cognitive Sciences, 19(6), 304–313. DOI: https://doi.org/10.1016/j.tics.2015.04.006, PMID: 25980649, PMCID: PMC4457571 Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569–1579. DOI: https://doi.org/10.1126/science .298.5598.1569, PMID: 12446899 Hinzen, W. (2013). Narrow syntax and the language of thought. Philosophical Psychology, 26(1), 1–23. 
DOI: https://doi.org/10 .1080/09515089.2011.627537 Neurobiology of Language 197 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d / . l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics Howard, D., & Patterson, K. (1992). The pyramids and palm trees test: A test of semantic access from words and pictures. Pearson Assessment. Hu, Z., Yang, H., Yang, Y., Nishida, S., Madden-Lombardi, C., Ventre- Dominey, J., Dominey, P. F., & Ogawa, K. (2019). Common neural system for sentence and picture comprehension across languages: A Chinese–Japanese bilingual study. Frontiers in Human Neuroscience, 13. DOI: https://doi.org/10.3389/fnhum.2019 .00380, PMID: 31708762, PMCID: PMC6823717 Huth, A. G., de Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600), 453–458. DOI: https://doi.org/10.1038/nature17637, PMID: 27121839, PMCID: PMC4852309 Ivanova, A. A., Srikant, S., Sueoka, Y., Kean, H. H., Dhamala, R., O’Reilly, U.-M., Bers, M. U., & Fedorenko, E. (2020). Comprehension of computer code relies primarily on domain- general executive resources. BioRxiv, 2020.04.16.045732. DOI: https://doi.org/10.1101/2020.04.16.045732 Ivanova, A. A., Kauf, C., Goldhaber, T., Kean, H. H., Mineroff, Z., Balewski, Z., Varley, R., Kanwisher, N., & Fedorenko, E. (2021). The neural basis of crossmodal event semantics [Manuscript in preparation]. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology. Jackendoff, R. (1987). The status of thematic relations in linguistic theory. Linguistic Inquiry, 18(3), 369–411. Jackendoff, R. (1996). How language helps us think. Pragmatics & Cognition, 4(1), 1–34. DOI: https://doi.org/10.1075/pc.4.1.03jac Jackson, R. L., Hoffman, P., Pobric, G., & Lambon Ralph, M. A. (2015). The nature and neural correlates of semantic association versus conceptual similarity. Cerebral Cortex, 25(11), 4319–4333. DOI: https://doi.org/10.1093/cercor/bhv003, PMID: 25636912, PMCID: PMC4816784 Jefferies, E., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia versus semantic dementia: A case-series com- parison. Brain, 129(8), 2132–2147. DOI: https://doi.org/10.1093 /brain/awl153, PMID: 16815878 Jouen, A.-L., Cazin, N., Hidot, S., Madden-Lombardi, C., Ventre- Dominey, J., & Dominey, P. F. (2019). Beyond the word and image: III. Neurodynamic properties of the semantic network. BioRxiv, 767384. DOI: https://doi.org/10.1101/767384 Jouen, A.-L., Ellmore, T. M., Madden, C. J., Pallier, C., Dominey, P. F., & Ventre-Dominey, J. (2015). Beyond the word and image: Characteristics of a common meaning system for language and vision revealed by functional and structural imaging. NeuroImage, 106, 72–85. DOI: https://doi.org/10.1016/j.neuroimage .2014.11.024, PMID: 25463475 Jouravlev, O., Zheng, D., Balewski, Z., Pongos, A. L. A., Levan, Z., Goldin-Meadow, S., & Fedorenko, E. (2019). Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia, 132, 107132. DOI: https://doi.org/10.1016 /j.neuropsychologia.2019.107132, PMID: 31276684, PMCID: PMC6708375 Julian, J. B., Fedorenko, E., Webster, J., & Kanwisher, N. (2012). An algorithmic method for functionally defining regions of interest in the ventral visual pathway. NeuroImage, 60(4), 2357–2364. 
DOI: https://doi.org/10.1016/j.neuroimage.2012.02.055, PMID: 22398396 Kay, J., Lesser, R., & Coltheart, M. (1992). Psycholinguistic assess- ments of language processing in aphasia. Lawrence Erlbaum. Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62(1), 621–647. DOI: https://doi.org/10.1146/annurev.psych .093008.131123, PMID: 20809790, PMCID: PMC4052444 Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203–205. DOI: https://doi.org/10.1126/science.7350657, PMID: 7350657 Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(1), 1–26. DOI: https://doi.org/10 .18637/jss.v082.i13 Lambon Ralph, M. A., Cipolotti, L., Manes, F., & Patterson, K. (2010). Taking both sides: Do unilateral anterior temporal lobe lesions disrupt semantic memory? Brain, 133(11), 3243–3255. DOI: https://doi.org/10.1093/brain/awq264, PMID: 20952378 Lambon Ralph, M. A., Jefferies, E., Patterson, K., & Rogers, T. T. (2017). The neural and computational bases of semantic cogni- tion. Nature Reviews Neuroscience, 18(1), 42–55. DOI: https:// doi.org/10.1038/nrn.2016.150, PMID: 27881854 Lau, E. F., Phillips, C., & Poeppel, D. (2008). A cortical network for semantics: (De)constructing the N400. Nature Reviews Neuroscience, 9(12), 920–933. DOI: https://doi.org/10.1038 /nrn2532, PMID: 19020511 Lau, E. F., Weber, K., Gramfort, A., Hämäläinen, M. S., & Kuperberg, G. R. (2016). Spatiotemporal signatures of lexical- semantic prediction. Cerebral Cortex, 26(4), 1377–1387. DOI: https://doi.org/10.1093/cercor/ bhu219, PMID: 25316341, PMCID: PMC4785937 Lewis, G. A., Poeppel, D., & Murphy, G. L. (2015). The neural bases of taxonomic and thematic conceptual relations: An MEG study. Neuropsychologia. DOI: https://doi.org/10.1016 /j.neuropsychologia.2015.01.011, PMID: 25582406, PMCID: PMC4484855 Liu, Y., Kim, J., Wilson, C., & Bedny, M. (2020). Computer code comprehension shares neural resources with formal logical infer- ence in the fronto-parietal network. BioRxiv, 2020.05.24 .096180. DOI: https://doi.org/10.1101/2020.05.24.096180 Lucy, L., & Gauthier, J. (2017). Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning. ArXiv:1705.11168 [Cs]. http://arxiv.org/ abs/1705.11168, DOI: https://doi.org/10.18653/v1/ W17-2810 Luria, A. R., Tsvetkova, L. S., & Futer, D. S. (1965). Aphasia in a composer ( V. G. Shebalin). Journal of the Neurological Sciences, 2(3), 288–292. DOI: https://doi.org/10.1016/0022-510x(65) 90113-9, PMID: 4860800 MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C. R., Suckling, J., Calvert, G. A., & Brammer, M. J. (2002). Neural systems underlying British sign language and audio-visual English processing in native users. Brain, 125(7), 1583–1593. DOI: https://doi.org/10.1093/brain/awf153, PMID: 12077007 Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recogni- tion: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202–227. DOI: https://doi.org /10.1037/0096-3445.132.2.202, PMID: 12825637 Mahon, B. Z. (2015). What is embodied about cognition? Language, Cognition and Neuroscience, 30(4), 420–429. 
DOI: https://doi.org/10.1080/23273798.2014.987791, PMID: 25914889, PMCID: PMC4405253 Mahon, B. Z., & Caramazza, A. (2008). A critical look at the em- bodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102(1), 59–70. DOI: https://doi.org/10.1016/j.jphysparis.2008.03.004, PMID: 18448316 Mahowald, K., & Fedorenko, E. (2016). Reliable individual-level neural markers of high-level language processing: A necessary Neurobiology of Language 198 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d / . l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics precursor for relating neural variability to behavioral and genetic variability. NeuroImage, 139, 74–93. DOI: https://doi.org/10 .1016/j.neuroimage.2016.05.073, PMID: 27261158 Marshall, J., Pring, T., & Chiat, S. (1993). Sentence processing therapy: Working at the level of the event. Aphasiology, 7(2), 177–199. DOI: https://doi.org/10.1080/02687039308249505 Matsumoto, A., Iidaka, T., Haneda, K., Okada, T., & Sadato, N. (2005). Linking semantic priming effect in functional MRI and event-related potentials. NeuroImage, 24(3), 624–634. DOI: https://doi.org/10.1016/j.neuroimage.2004.09.008, PMID: 15652298 McClelland, J. L., & Rogers, T. T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4(4), 310–322. DOI: https://doi.org/10.1038 /nrn1076, PMID: 12671647 Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14(4), 710–716. DOI: https://doi.org/10 .3758/BF03196826, PMID: 17972738 Milberg, W., & Blumstein, S. E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language, 14(2), 371–385. DOI: https://doi.org/10.1016/0093-934X(81)90086-9, PMID: 7306789 Mion, M., Patterson, K., Acosta-Cabronero, J., Pengas, G., Izquierdo-Garcia, D., Hong, Y. T., Fryer, T. D., Williams, G. B., Hodges, J. R., & Nestor, P. J. (2010). What the left and right anterior fusiform gyri tell us about semantic memory. Brain, 133(11), 3256–3268. DOI: https://doi.org/10.1093/brain/awq272, PMID: 20952377 Monti, M. M., Parsons, L. M., & Osherson, D. N. (2009). The boundaries of language and thought in deductive inference. Proceedings of the National Academy of Sciences, 106(30), 12554–12559. DOI: https://doi.org/10.1073/pnas.0902422106, PMID: 19617569, PMCID: PMC2718391 Monti, M. M., Parsons, L. M., & Osherson, D. N. (2012). Thought beyond language: Neural dissociation of algebra and natural language. Psychological Science, 23(8), 914–922. DOI: https:// doi.org/10.1177/0956797612437427, PMID: 22760883 Nieto-Castañón, A., & Fedorenko, E. (2012). Subject-specific func- tional localizers increase sensitivity and functional resolution of multi-subject analyses. NeuroImage, 63(3), 1646–1669. DOI: https://doi.org/10.1016/j.neuroimage.2012.06.065, PMID: 22784644, PMCID: PMC3477490 Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. DOI: https://doi.org/10.1016/0028-3932(71)90067-4, PMID: 5146491 Papafragou, A., Hulbert, J., & Trueswell, J. (2008). Does language guide event perception? Evidence from eye movements. Cognition, 108(1), 155–184. 
DOI: https://doi.org/10.1016/j.cognition .2008.02.007, PMID: 18395705, PMCID: PMC2810627 Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8(12), 976–987. DOI: https://doi.org/10.1038/nrn2277, PMID: 18026167 Pereira, F., Lou, B., Pritchett, B., Ritter, S., Gershman, S. J., Kanwisher, N., Botvinick, M., & Fedorenko, E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1), 1–13. DOI: https://doi.org/10.1038 /s41467-018-03068-4, PMID: 29511192, PMCID: PMC5840373 Pinker, S., & Levin, B. (1991). Lexical and conceptual semantics. MIT Press. Pontillo, D. F., Salverda, A. P., & Tanenhaus, M. K. (2015). Flexible use of phonological and visual memory in language-mediated visual search. In D. C. Noelle, R. Dale, A. Warlaumont, J. Yoshimi, T. Matlock, C. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Meeting of the Cognitive Science Society. Mind, Technology, and Society (pp. 1895–1900). Curran Associates. Potter, M. C., & Faulconer, B. A. (1975). Time to understand pictures and words. Nature, 253(5491), 437–438. DOI: https:// doi.org/10.1038/253437a0, PMID: 1110787 Potter, M. C., Kroll, J. F., Yachzel, B., Carpenter, E., & Sherman, J. (1986). Pictures in sentences: Understanding without words. Journal of Experimental Psychology: General, 115(3), 281–294. DOI: https://doi.org/10.1037/0096-3445.115.3.281, PMID: 2944988 Pritchett, B. L., Hoeflin, C., Koldewyn, K., Dechter, E., & Fedorenko, E. (2018). High-level language processing regions are not engaged in action observation or imitation. Journal of Neurophysiology, 120(5), 2555–2570. DOI: https://doi.org/10.1152/jn.00222.2018, PMID: 30156457, PMCID: PMC6295536 Proverbio, A. M., & Riva, F. (2009). RP and N400 ERP components reflect semantic violations in visual processing of human actions. Neuroscience Letters, 459(3), 142–146. DOI: https://doi.org /10.1016/j.neulet.2009.05.012, PMID: 19427368 Pujol, J., Deus, J., Losilla, J. M., & Capdevila, A. (1999). Cerebral lateralization of language in normal left-handed people studied by functional MRI. Neurology, 52(5), 1038–1043. DOI: https:// doi.org/10.1212/wnl.52.5.1038, PMID: 10102425 Pulvermuller, F. (1999). Words in the brain’s language. Behavioral and Brain Sciences, 22(2), 253–279. DOI: https://doi.org/10 .1017/S0140525X9900182X Raven, J., & Raven, J. (2003). Raven Progressive Matrices. In R. S. McCallum (Ed.), Handbook of nonverbal assessment (pp. 223–237). Springer US. DOI: https://doi.org/10.1007/978-1-4615-0153-4_11 Rehrig, G., Hayes, T. R., Henderson, J. M., & Ferreira, F. (2020). When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention. Memory & Cognition, 48, 1181–1195. DOI: https://doi .org/10.3758/s13421-020-01050-4, PMID: 32430889 Rissman, L., & Majid, A. (2019). Thematic roles: Core knowledge or linguistic construct? Psychonomic Bulletin & Review, 26(6), 1850–1869. DOI: https://doi.org/10.3758/s13423-019-01634-5, PMID: 31290008, PMCID: PMC6863944 Saygın, A. P., Dick, F., Wilson, S. M., Dronkers, N. F., & Bates, E. (2003). Neural resources for processing language and environ- mental sounds: Evidence from aphasia. Brain, 126(4), 928–945. DOI: https://doi.org/10.1093/brain/awg082, PMID: 12615649 Saygın, A. P., Wilson, S. M., Dronkers, N. F., & Bates, E. (2004). 
Action comprehension in aphasia: Linguistic and non-linguistic deficits and their lesion correlates. Neuropsychologia, 42(13), 1788–1804. DOI: https://doi.org/10.1016/j.neuropsychologia .2004.04.016, PMID: 15351628 Schapiro, A. C., McClelland, J. L., Welbourne, S. R., Rogers, T. T., & Lambon Ralph, M. A. (2013). Why bilateral damage is worse than unilateral damage to the brain. Journal of Cognitive Neuroscience, 25(12), 2107–2123. DOI: https://doi.org/10.1162/jocn_a_00441, PMID: 23806177 Schwartz, M. F., Kimberg, D. Y., Walker, G. M., Brecher, A., Faseyitan, O. K., Dell, G. S., Mirman, D., & Coslett, H. B. (2011). Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 108(20), 8 5 2 0 – 8 5 2 4 . D O I : h t t p s : / / d o i . o r g / 1 0 . 1 0 7 3 / p n a s .1014935108, PMID: 21540329, PMCID: PMC3100928 Scott, T. L., Gallée, J., & Fedorenko, E. (2017). A new fun and robust version of an fMRI localizer for the frontotemporal language system. Cognitive Neuroscience, 8(3), 167–176. DOI: https:// doi.org/10.1080/17588928.2016.1201466, PMID: 27386919 Neurobiology of Language 199 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d . / l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics Seed, A., & Tomasello, M. (2010). Primate Cognition. Topics in Cognitive Science, 2(3), 407–419. DOI: https://doi.org/10.1111 /j.1756-8765.2010.01099.x, PMID: 25163869 Sevostianov, A., Horwitz, B., Nechaev, V., Williams, R., Fromm, S., & Braun, A. R. (2002). fMRI study comparing names versus pic- tures of objects. Human Brain Mapping, 16(3), 168–175. DOI: https://doi.org/10.1002/hbm.10037, PMID: 12112770, PMCID: PMC6871815 Shain, C., Blank, I. A., van Schijndel, M., Schuler, W., & Fedorenko, E. (2020). fMRI reveals language-specific predictive c o d i n g d u r i n g n a t u r a l i s t i c se n t e n c e co m p r e h e n s i o n . Neuropsychologia, 138, 107307. DOI: https://doi.org/10.1016 /j.neuropsychologia.2019.107307, PMID: 31874149, PMCID: PMC7140726 Shinkareva, S. V., Malave, V. L., Mason, R. A., Mitchell, T. M., & Just, M. A. (2011). Commonality of neural representations of words and pictures. NeuroImage, 54(3), 2418–2425. DOI: https://doi.org /10.1016/j.neuroimage.2010.10.042, PMID: 20974270 Sitnikova, T., Holcomb, P. J., Kiyonaga, K. A., & Kuperberg, G. R. (2008). Two neurocognitive mechanisms of semantic integration during the comprehension of visual real-world events. Journal of Cognitive Neuroscience, 20(11), 2037–2057. DOI: https://doi .org/10.1162/jocn.2008.20143, PMID: 18416681, PMCID: PMC2673092 Snowden, J. S., Harris, J. M., Thompson, J. C., Kobylecki, C., Jones, M., Richardson, A. M., & Neary, D. (2018). Semantic dementia and the left and right temporal lobes. Cortex, 107, 188–203. DOI: https:// doi.org/10.1016/j.cortex.2017.08.024, PMID: 28947063 Sokolov, A. (1972). Inner speech and thought. Springer US. DOI: https://doi.org/10.1007/978-1-4684-1914-6 Spelke, E. S. (1976). Infants’ intermodal perception of events. Cognitive Psychology, 8(4), 553–560. DOI: https://doi.org/10 .1016/0010-0285(76)90018-9 Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89–96. DOI: https://doi.org/10 .1111/j.1467-7687.2007.00569.x, PMID: 17181705 Strickland, B. (2017). 
Language reflects “core” cognition: A new theory about the origin of cross-linguistic regularities. Cognitive Science, 41(1), 70–101. DOI: https://doi.org/10.1111/cogs .12332, PMID: 26923431 Tahmasebi, A. M., Davis, M. H., Wild, C. J., Rodd, J. M., Hakyemez, H., Abolmaesumi, P., & Johnsrude, I. S. (2012). Is the link between anatomical structure and function equally strong at all cognitive levels of processing? Cerebral Cortex, 22(7), 1593–1603. DOI: https://doi.org/10.1093/cercor/bhr205, PMID: 21893681 Talmy, L. (2000). Toward a cognitive semantics (Vol. 2). MIT Press. DOI: https://doi.org/10.7551/mitpress/6848.001.0001 Tarhan, L., & Konkle, T. (2020). Sociality and interaction envelope organize visual action representations. Nature Communications, 11(1), 3002. DOI: https://doi.org/10.1038/s41467-020-16846-w, PMID: 32532982, PMCID: PMC7293348 Teige, C., Cornelissen, P. L., Mollo, G., Gonzalez Alam, T. R. del J., McCarty, K., Smallwood, J., & Jefferies, E. (2019). Dissociations in semantic cognition: Oscillatory evidence for opposing effects of semantic control and type of semantic relation in anterior and posterior temporal cortex. Cortex, 120, 308–325. DOI: https:// doi.org/10.1016/j.cortex.2019.07.002, PMID: 31394366, PMCID: PMC6854548 Thierry, G., & Price, C. J. (2006). Dissociating verbal and nonverbal conceptual processing in the human brain. Journal of Cognitive Neuroscience, 18(6), 1018–1028. DOI: https://doi.org/10.1162 /jocn.2006.18.6.1018, PMID: 16839307 Thompson, S. A., Patterson, K., & Hodges, J. R. (2003). Left/right asymmetry of atrophy in semantic dementia: Behavioral-cognitive implications. Neurology, 61(9), 1196–1203. DOI: https://doi.org /10.1212/01.wnl.0000091868.28557.b8, PMID: 14610120 Trueswell, J. C., & Papafragou, A. (2010). Perceiving and remem- bering events cross-linguistically: Evidence from dual-task para- digms. Journal of Memory and Language, 63(1), 64–82. DOI: https://doi.org/10.1016/j.jml.2010.02.006 Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(October), 433–460. DOI: https://doi.org/10.1093 /mind/LIX.236.433 Tyler, L. K., & Moss, H. E. (2001). Towards a distributed account of conceptual knowledge. Trends in Cognitive Sciences, 5(6), 244–252. DOI: https://doi.org/10.1016/S1364-6613(00)01651-X, PMID: 11390295 Vandenberghe, R., Price, C., Wise, R., Josephs, O., & Frackowiak, R. S. (1996). Functional anatomy of a common semantic system for words and pictures. Nature, 383(6597), 254–256. DOI: https://doi.org/10.1038/383254a0, PMID: 8805700 Varley, R. A., Klessinger, N. J. C., Romanowski, C. A. J., & Siegal, M. (2005). Agrammatic but numerate. Proceedings of the National Academy of Sciences of the United States of America, 102(9), 3519–3524. DOI: https://doi.org/10.1073/pnas .0407470102, PMID: 15713804, PMCID: PMC552916 Varley, R. A., & Siegal, M. (2000). Evidence for cognition without grammar from causal reasoning and “theory of mind” in an agrammatic aphasic patient. Current Biology, 10(12), 723–726. DOI: https://doi.org/10.1016/s0960-9822(00)00538-8, PMID: 10873809 Vázquez-Rodríguez, B., Suárez, L. E., Markello, R. D., Shafiei, G., Paquola, C., Hagmann, P., van den Heuvel, M. P., Bernhardt, B. C., Spreng, R. N., & Misic, B. (2019). Gradients of structure– function tethering across neocortex. Proceedings of the National Academy of Sciences, 116(42), 21219–21227. DOI: https://doi.org/10.1073/pnas.1903403116, PMID: 31570622, PMCID: PMC6800358 Visser, M., Jefferies, E., Embleton, K. V., & Lambon Ralph, M. A. (2012). 
Both the middle temporal gyrus and the ventral anterior temporal area are crucial for multimodal semantic processing: Distortion-corrected fMRI evidence for a double gradient of information convergence in the temporal lobes. Journal of Cognitive Neuroscience, 24(8), 1766–1778. DOI: https://doi.org /10.1162/jocn_a_00244, PMID: 22621260 Võ, M. L.-H., & Wolfe, J. M. (2013). Differential ERP signatures elicited by semantic and syntactic processing in scenes. Psychological Science, 24(9), 1816–1823. DOI: https://doi.org /10.1177/0956797613476955, PMID: 23842954, PMCID: PMC4838599 Vygotsky, L. S. (2012). Thought and language. MIT Press. (Original work published 1934) Wagner, A. D., Desmond, J. E., Demb, J. B., Glover, G. H., & Gabrieli, J. D. (1997). Semantic repetition priming for verbal and pictorial knowledge: A functional MRI study of left inferior prefrontal cortex. Journal of Cognitive Neuroscience, 9(6), 714–726. DOI: https://doi.org/10.1162/jocn.1997.9.6.714, PMID: 23964594 Wagner, L., & Lakusta, L. (2009). Using language to navigate the infant mind. Perspectives on Psychological Science, 4(2), 177–184. DOI: https://doi.org/10.1111/j.1745-6924.2009.01117.x, PMID: 20161142, PMCID: PMC2731417 Walbrin, J., Downing, P., & Koldewyn, K. (2018). Neural responses to visually observed social interactions. Neuropsychologia, 112, 31–39. DOI: https://doi.org/10.1016/j.neuropsychologia .2018.02.023, PMID: 29476765, PMCID: PMC5899757 Wang, J., Cherkassky, V. L., Yang, Y., Chang, K.-M. K., Vargas, R., Diana, N., & Just, M. A. (2016). Identifying thematic roles from Neurobiology of Language 200 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d . / l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 The language network and event semantics neural representations measured by functional magnetic reso- nance imaging. Cognitive Neuropsychology, 33(3–4), 257–264. DOI: https://doi.org/10.1080/02643294.2016.1182480, PMID: 27314175 Wang, X., Wu, W., Ling, Z., Xu, Y., Fang, Y., Wang, X., Binder, J. R., Men, W., Gao, J.-H., & Bi, Y. (2018). Organizational principles of abstract words in the human brain. Cerebral Cortex, 28(12), 4305–4318. DOI: https://doi.org/10.1093/cercor/ bhx283, PMID: 29186345 Watson, J. B. (1920). Is thinking merely action of language mecha- nisms? British Journal of Psychology. General Section, 11(1), 87–104. DOI: https://doi.org/10.1111/j.2044-8295.1920.tb00010.x Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., & Mitchell, T. (2014). Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PLOS ONE, 9(11), e112575. DOI: https://doi.org/10.1371/journal .pone.0112575, PMID: 25426840, PMCID: PMC4245107 West, W. C., & Holcomb, P. J. (2002). Event-related potentials during discourse-level semantic integration of complex pictures. Cognitive Brain Research, 13(3), 363–375. DOI: https://doi.org /10.1016/S0926-6410(01)00129-X, PMID: 11919001 Williams, A., Reddigari, S., & Pylkkänen, L. (2017). Early sensitivity of left perisylvian cortex to relationality in nouns and verbs. Neuropsychologia, 100, 131–143. DOI: https://doi.org/10 .1016/j.neuropsychologia.2017.04.029, PMID: 28450204 Willits, J. A., Amato, M. S., & MacDonald, M. C. (2015). Language knowledge and event knowledge in language use. Cognitive Psychology, 78, 1–27. DOI: https://doi.org/10.1016/j.cogpsych .2015.02.002, PMID: 25791750, PMCID: PMC5951625 Winograd, T. (1976). 
Artificial intelligence and language compre- hension. US Department of Health, Education, and Welfare, National Institute of Education. Wittgenstein, L. (1961). Tractatus Logico-Philosophicus [Treatise on Logic and Philosophy] (Trans. Pears and McGuinness). Routledge. (Original work published 1922) Woolgar, A., Duncan, J., Manes, F., & Fedorenko, E. (2018). Fluid intelligence is supported by the multiple-demand system not the language system. Nature Human Behaviour, 2(3), 200–204. DOI: https://doi.org/10.1038/s41562-017-0282-3, PMID: 31620646, PMCID: PMC6795543 Wu, D. H., Waller, S., & Chatterjee, A. (2007). The functional neu- roanatomy of thematic role and locative relational knowledge. Journal of Cognitive Neuroscience, 19(9), 1542–1555. DOI: https://doi.org/10.1162/jocn.2007.19.9.1542, PMID: 17714015 Wurm, M. F., & Caramazza, A. (2019). Distinct roles of temporal and frontoparietal cortex in representing actions across vision and language. Nature Communications, 10(1), 1–10. DOI: https://doi.org/10.1038/s41467-018-08084-y, PMID: 30655531, PMCID: PMC6336825 Xu, Y., Wang, X., Wang, X., Men, W., Gao, J.-H., & Bi, Y. (2018). Doctor, teacher, and stethoscope: Neural representation of different types of semantic relations. Journal of Neuroscience, 38(13), 3303–3317. DOI: https://doi.org/10.1523/ JNEUROSCI.2562 -17.2018, PMID: 29476016, PMCID: PMC6596060 Zhu, Z., Bastiaansen, M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52, 100855. DOI: https://doi .org/10.1016/j.jneuroling.2019.100855 Zimmerer, V. C., Varley, R. A., Deamer, F., & Hinzen, W. (2019). Factive and counterfactive interpretation of embedded clauses in aphasia and its relationship with lexical, syntactic and general cognitive capacities. Journal of Neurolinguistics, 49, 29–44. DOI: https://doi.org/10.1016/j.jneuroling.2018.08.002 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u n o / l / l a r t i c e - p d f / / / / 2 2 1 7 6 1 8 9 7 4 7 9 n o _ a _ 0 0 0 3 0 p d / . l f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 Neurobiology of Language 201ARTÍCULO DE INVESTIGACIÓN imagen
ARTÍCULO DE INVESTIGACIÓN imagen
ARTÍCULO DE INVESTIGACIÓN imagen
ARTÍCULO DE INVESTIGACIÓN imagen
ARTÍCULO DE INVESTIGACIÓN imagen
ARTÍCULO DE INVESTIGACIÓN imagen

Descargar PDF