Decoding Brain Activity Associated with Literal and Metaphoric Sentence
Comprehension Using Distributional Semantic Models
Vesna G. Djokic†
Jean Maillard‡ Luana Bulat‡ Ekaterina Shutova†
†ILLC, University of Amsterdam, Netherlands
‡Dept. of Computer Science & Technology, University of Cambridge, UK
vesna@imsquared.eu, jean@maillard.it,
ltf24@cam.ac.uk, e.shutova@uva.nl
Abstract
Recent years have seen a growing interest
within the natural language processing (自然语言处理)
community in evaluating the ability of seman-
tic models to capture human meaning represen-
tation in the brain. Existing research has mainly
focused on applying semantic models to de-
code brain activity patterns associated with
the meaning of individual words and, more
recently, this approach has been extended to
sentences and larger text fragments. Our work
is the first to investigate metaphor process-
ing in the brain in this context. We evaluate
a range of semantic models (word embed-
dings, compositional, and visual models) in
their ability to decode brain activity associated
with reading of both literal and metaphoric
sentences. Our results suggest that composi-
tional models and word embeddings are able
to capture differences in the processing of lit-
eral and metaphoric sentences, providing sup-
port for the idea that the literal meaning is
not fully accessible during familiar metaphor
comprehension.
1 Introduction
Distributional semantics aims to represent the mean-
ing of linguistic fragments as high-dimensional
dense vectors. It has been successfully used to
model the meaning of individual words in seman-
tic similarity and analogy tasks (Mikolov et al.,
2013; Pennington et al., 2014); as well as the
meaning of larger linguistic units in a variety of
tasks, such as translation (Bahdanau et al., 2014)
and natural language inference (Bowman et al.,
2015). Recent research has also demonstrated the
ability of distributional models to predict patterns
of brain activity associated with the meaning of
words, obtained via functional magnetic resonance
imaging (fMRI) (Mitchell et al., 2008; Devereux
et al., 2010; Pereira et al., 2013). Following in their
footsteps, Anderson et al. (2017b) have investigated
visually grounded semantic models in this context.
They found that while both visual and text-based
models can equally decode concrete words, 文本-
based models show an overall advantage over vi-
sual models when decoding more abstract words.
Other research has shown that data-driven
semantic models can also successfully predict pat-
terns of brain activity associated with the pro-
cessing of sentences (Pereira et al., 2018) 和
larger narrative text passages (Wehbe et al., 2014;
Huth et al., 2016). Recently, Jain and Huth (2018)
investigated long short-term memory (LSTM)
recurrent neural networks and showed that seman-
tic models that incorporate larger-sized context
windows outperform those with smaller-sized
context windows, as well as the baseline bag-
of-words model, in predicting brain activity
associated with narrative listening. This suggests
that compositional semantic models are suffi-
ciently advanced to study the impact of linguistic
context on semantic representation in the brain. In
this paper, we investigate the extent to which lex-
ical and compositional semantic models are able
to capture differences in human meaning represen-
tations, resulting from meaning disambiguation of
literal and metaphoric uses of words in context.
Metaphoric uses of words involve a transfer
of meaning, arising through semantic composition
(Mohammad et al., 2016). For example, the mean-
ing of the verb push is not intrinsically metaphor-
ical; yet it receives a metaphorical interpretation
Transactions of the Association for Computational Linguistics, vol. 8, pp. 231–246, 2020. https://doi.org/10.1162/tacl_a_00307
Action Editor: Walter Daelemans. Submission batch: 11/2019; Published 4/2020.
© 2020 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
when we talk about pushing agendas, push-
ing drugs, or pushing ourselves. Theories of
metaphor comprehension differ in terms of the
kinds of processes (and stages) involved in ar-
riving at the metaphorical interpretation, mainly
whether or not the abstract meaning is indirectly
accessed via processing the literal meaning first
or directly accessible largely bypassing the lit-
eral meaning (Bambini et al., 2016). To this ex-
tent, the role that access to and retrieval of the
literal meaning plays during metaphor processing
is often debated. 一方面, metaphor com-
prehension involves juxtaposing two unlike things
and this may invite a search for common relational
structure through a process of direct comparison.
Inferences flow from the vehicle to the topic giv-
ing rise to the metaphoric interpretation (Gentner
and Bowdle, 2005). In a slightly different vein,
Lakoff and Johnson (1980) suggest that metaphor comprehen-
sion involves systematic mappings (from a
concrete domain onto another, typically more ab-
stract domain) that become established through
co-occurrences over the course of experience.
This draws on mental imagery or the re-activation
of neural representations involved during pri-
mary experience (i.e., sensorimotor simulation)
allowing appropriate inferences to be made. 其他
理论, 然而, suggest that the literal mean-
ing in metaphor comprehension may be largely
bypassed if the abstract meaning is directly or
immediately accessible involving more categori-
cal processing (Glucksberg, 2003). For example,
the word used metaphorically could be imme-
diately recognized as belonging to an abstract
superordinate category of which both the vehi-
cle and topic belong. 或者, it has been
suggested that more familiar metaphors involve
categorical processing, while comparatively novel
metaphors will initially involve greater processing
of the literal meaning (Desai et al., 2011).
To contribute to our understanding of metaphor
comprehension, including the accessibility of the
literal meaning, we investigate whether semantic
models are able to decode patterns of brain activ-
ity associated with literal and metaphoric sentence
comprehension, using the fMRI dataset of Djokic
et al. (forthcoming). This dataset contains neural
activity associated with the processing of both
literal and familiar metaphorical uses of hand-
action verbs (such as push, grasp, squeeze, etc.)
in the context of their nominal object. We exper-
iment with several kinds of semantic models: (1)
word-based models, namely, word embeddings of
the verb and the nominal object; (2) compositional
models, namely, vector addition and an LSTM
recurrent neural network; 和 (3) visual models,
learning visual representations of the verb and
its nominal object. This choice of models allows
us to investigate: (1) the role of the verb and
its nominal object (captured by their respective
word embeddings) in the interpretation of literal
and metaphoric sentences; (2) the extent to which
compositional models capture the patterns of hu-
man meaning representation in case of literal and
metaphoric use; and (3) the role of visual infor-
mation in literal and metaphor interpretation. 我们
test these models in their ability to decode brain
activity associated with literal and metaphoric
sentence comprehension, using the similarity de-
coding method of Anderson et al. (2016). We per-
form decoding at the whole-brain level, as well
as within specific regions implicated in linguistic,
motor and visual processing.
Our results demonstrate that several of our se-
mantic models are able to predict patterns of brain
activity associated with the meaning of literal and
metaphorical sentences. We find that (1) com-
positional semantic models are superior in decod-
ing both literal and metaphorical sentences as
compared to the lexical (i.e., word-based) mod-
els; (2) semantic representations of the verb are
superior compared to that of the nominal object in
decoding literal phrases, whereas semantic repre-
sentations of the object are superior to that of the
verb in decoding metaphorical phrases; and (3) lin-
guistic models capture both language-related and
sensorimotor representations for literal sentences—
In contrast, for metaphoric sentences, linguistic
models capture language-related representations
and the visual models capture sensorimotor rep-
resentations in the brain. Although the results do
not offer straightforward conclusions regarding
the role of the literal meaning in metaphor compre-
hension, they provide some support to the idea that
lexical-semantic relations associated with the lit-
eral meaning are not fully accessible during famil-
iar metaphor comprehension, particularly within
action-related brain regions.
2 Related Work
2.1 Decoding Brain Activity
Mitchell et al. (2008) were the first to show that
distributional representations of concrete nouns
built from co-occurrence counts with 25 experi-
ential verbs could predict brain activity elicited by
these nouns. Later studies used the fMRI data of
Mitchell et al. (2008) as a benchmark for testing a
range of semantic models including topic model-
based semantic features learned from Wikipedia
(Pereira et al., 2013), feature-norm based semantic
特征 (Devereux et al., 2010), and skip-gram
word embeddings (Bulat et al., 2017). 安德森
等人. (2013) demonstrate that visually grounded
semantic models can also decode brain activity
associated with concrete words and show the best
results using multimodal models. 此外,
Anderson et al. (2015) show that text-based mod-
els are superior in predicting brain activity of
concrete words in brain areas related to linguis-
tic processing, and the visual models in those re-
lated to visual processing. 最后, Anderson et al.
(2017乙) use image and text-based semantic models
to decode an fMRI dataset containing nouns with
varying degree of concreteness. They show that
text-based models have an advantage decoding
the more abstract words over the visual models,
supporting the view that concrete concepts involve
linguistic and visual codes, while abstract concepts
mainly linguistic codes (Paivio, 1971).
Subsequent studies have focused on evaluating
the ability of distributional semantic models to
encode brain activity elicited by larger text
fragments. Pereira et al. (2018) showed that a
regression model trained to map between word
embeddings and the fMRI patterns of words could
predict neural representations for unseen sen-
时态. Adding to this, both Wehbe et al. (2014)
and Huth et al. (2016) showed that distributional
semantic models could predict neural activity
associated with narrative comprehension. For in-
stance, Wehbe et al. (2014) showed that a re-
gression model that learned a mapping between
several story features (distributional semantics,
syntax, and discourse-related) and fMRI patterns
associated with narrative reading could distinguish
between two stories. These findings suggest that
encoding models using word embeddings as fea-
tures can predict brain activity associated with
larger linguistic units. Other researchers have
evaluated models that more directly consider the
role played by the linguistic context and syntax
(Anderson et al., 2019; Jain and Huth, 2018).
Jain and Huth (2018) showed that a regression-
based model mapping between fMRI patterns
associated with narrative listening and contextual
features obtained from an LSTM language model
outperformed the bag-of-words model. 而且,
the performance increased when using LSTMs
with larger context-windows.
In parallel to this work, several other works have
been successful in decoding word-level and sen-
tential meanings using semantic models based on
human behavioral data. Chang et al. (2010) use the
taxonomic encodings of McRae et al. (2005),
while Fernandino et al. (2015) use semantic mod-
els based on human-elicited salience scores for
sensorimotor attributes to decode neural activity
associated with concrete concepts. Interestingly,
the latter report that their model is unable to
decode brain activity associated with the meaning
of more abstract concepts. Finally, other research
has achieved similar success in decoding sen-
tential meanings using neuro-cognitively driven
features that more closely reflect human experi-
ences (Anderson et al., 2017a; Wang et al., 2017;
Just et al., 2017). For example, Anderson et al.
(2017a) showed that a multiple-regression model
trained to map between a 65-dimensional experien-
tial attribute model of word meaning (e.g., motor,
空间的, social-emotional) and the fMRI activa-
tions associated with words could predict neural
activation of unseen sentences. These findings
highlight the importance of considering the neu-
rocognitive constraints on semantic representation
in the brain.
2.2 Semantic Representation in the Brain
Semantic processing is thought to depend on a
number of brain regions functioning in concert
as a unified semantic network linking language,
记忆, and modality-specific systems in the
brain (Binder et al., 2009). Xu et al. (2016) provide
evidence in support of at least three functionally
segregated systems that together comprise such
a semantic network. A left-lateralized language-
based system spanning frontal-temporal (e.g., left
inferior frontal gyrus [LIFG], left posterior mid-
dle temporal gyrus [LMTP]), but also parietal
地区, is associated with lexical-semantics and
syntactic processing. It preferentially responds to
language tasks when compared to non-linguistic
tasks of similar complexity (Fedorenko et al.,
2011). Notably, both Devereux et al. (2014) and
Anderson et al. (2015) found that linguistic models
could decode concrete concepts within brain areas
within this system, mainly the LMTP. Importantly,
this system works in tandem with a memory-
based simulation system that interacts directly
with medial-temporal areas critical in memory
(and multimodal) 加工. The memory-based
simulation system retrieves memory images rele-
vant to a concept and includes occipital areas such
as the superior lateral occipital cortex, 受到牵连
in visual processing and which Anderson et al.
(2015) showed could decode concrete concepts
with visual models. This system also recruits
modality-specific information. In line with this,
Carota et al. (2017) showed that the semantic sim-
ilarity of text-based models correlates with fMRI
patterns of action words not only in language-
related areas, but also in motor areas (left precen-
tral gyrus [LPG], left premotor cortex [LPM]).
Finally, a fronto-parietal semantic control system
manages interactions between these two systems,
such as directing attention to different aspects of
meaning depending on the linguistic context.
Prior neuroimaging experiments show that
concrete concepts activate the relevant modality-
specific systems in the brain (巴尔萨卢, 2008, 2009),
while the processing of abstract concepts has been
found to engage mainly language-related brain re-
gions in the left hemisphere and areas implicated in
cognitive control (Binder et al., 2005; Sabsevitz et al.,
2005). Relatedly, action-related words and literal
phrases activate motor regions (e.g., to ac-
cess motoric features of verbs) (Pulvermuller, 2005;
Kemmerer et al., 2008). In contrast, the degree
to which action-related metaphors engage motor
brain regions appears to depend on novelty, with
more familiar metaphors (Desai et al., 2011) show-
ing little to no activity in motor areas. In all,
concrete language involves modality-specific and
language-related brain regions, while abstract
language mainly language areas (Hoffman et al.,
2015).
To assess the role of linguistic versus visual
information in literal and metaphor decoding, 我们
investigated the extent to which our semantic
models were able to decode literal and metaphoric
sentences not only across the whole brain (和
brain’s lobes), but also within specific brain
regions of interest (ROIs) implicated in visual,
行动, and language processing. The visual ROIs
include high-level visual brain regions (left lateral
occipital temporal cortex, left ventral temporal
cortex), part of the ventral visual stream implicated
in object recognition (Bugatus et al., 2017).
The action ROIs include sensorimotor brain re-
gions (LPG, LPM) implicated in action-semantics
(Kemmerer et al., 2008). Finally, the language-
related ROIs include areas of the classic language
网络 (LIFG, LMTP) implicated in lexico-
semantic and syntactic processing (Fedorenko et al.,
2012).
We expect to find that lexical and compositional
semantic models can capture differences in the
processing of literal and metaphoric language in
the brain. In line with the idea that literal language
co-occurs more directly with our everyday percep-
tual experience, we expect that visual models will
show an overall advantage in literal but perhaps
not metaphor decoding across the whole brain
(particularly within Occipital and Temporal lobes)
and in visual (行动) ROIs compared to language-
related ROIs. In contrast, for metaphor decoding
we expect that linguistic models will mainly show
an advantage in language-related ROIs compared
with visual (and action) ROIs due to their more
abstract nature. Finally, we expect compositional
models to be superior to lexical models in met-
aphor decoding, which relies on semantic com-
position for meaning disambiguation in context.
This allows investigating whether metaphor com-
prehension involves lingering access to the literal
meaning including more grounded visual and
sensorimotor representations.
3 Brain Imaging Data
Stimuli Stimuli consisted of sentences divided
into five main conditions: 40 affirmative literal,
40 negated literal, 40 affirmative metaphor, 40
negated metaphor, 和 40 affirmative literal para-
phrases of the metaphor (used as control). A total
of 31 unique hand-action verbs were used (9 verbs
were re-used twice per condition). For each verb,
the authors created four conditions: affirmative
文字, affirmative metaphoric, negated literal, 和
negated metaphoric. All sentences were in the
third person singular, present progressive tense;
see Figure 1. Stimuli were created by the au-
thors and normed for psycholinguistic variables
(i.e., length, familiarity, concreteness) by an
Figure 1: Sample stimuli for the verb push.
independent set of participants in a behavioral
实验.
Participants Fifteen adults (8 female, ages 18
to 35) were involved in the fMRI study. All
participants were right-handed, native English
speakers.
Experimental Paradigm Participants were in-
structed to passively read the object of the sentence
(例如, ‘‘the yellow lemon’’), briefly shown on
screen first, followed by the sentence (例如, ‘‘She’s
squeezing the lemon’’). The object was shown on
screen for 2 s, followed by a 0.5 s interval, 然后
the sentence was presented for 4 s followed by
a rest of 8 s. A total of 5 runs were completed,
each lasting 10.15 minutes (3 participants only
completed 4 runs). Stimulus presentation was
randomized across participants.
fMRI data acquisition fMRI images were ac-
quired with a Siemens MAGNETOM Trio 3T
System with a 32-channel head matrix coil. 高的-
resolution anatomical scans were acquired with
a structural T1-weighted magnetization prepared
rapid gradient echo (MPRAGE) with TR =
1950 ms, TE = 2.26 ms, flip angle 10 degrees, 256 ×
256 mm matrix, 1 mm resolution, 和 208 coro-
nal slices. Whole brain functional images were
obtained with a T2* weighted single-shot gradient-
recalled echo-planar imaging (EPI) sequence
using blood oxygenation-level-dependent
contrast with TR = 2000 ms, TE = 30 ms, flip
angle 90 degrees, 64 × 64 mm matrix, 3.5 mm
resolution. Each functional image consisted of 37
contiguous axial slices, acquired in interleaved
mode.
4 Semantic Models
4.1 Linguistic Models
All our linguistic models are based on GloVe
(Pennington et al., 2014) 100-dimensional (dim)
word vectors provided by the authors, trained
on Wikipedia and the Gigaword corpus.1 We
investigate the following semantic models:
Individual Word Vectors In this model, stim-
ulus phrases are represented as the individual D-
dim word embeddings for their verb and direct
目的. We will refer to these models as VERB and
OBJECT, 分别.
Concatenation We then experiment with mod-
elling phrase meanings as the 2D-dim concat-
enation of their verb and direct object embeddings
(VERBOBJECT).
Addition This model takes the embeddings
w1, . . . , wn for the words of the stimulus phrase,
and computes the stimulus phrase representation
as their average: h = (1/n) Σ_{i=1}^{n} w_i.
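As a minimal sketch of this composition step (an illustrative re-implementation with NumPy; the toy vectors below are stand-ins, not actual GloVe embeddings):

```python
import numpy as np

def additive_phrase(embeddings):
    """Compose a phrase vector as the average of its word vectors:
    h = (1/n) * sum_i w_i."""
    W = np.stack(embeddings)   # shape: (n_words, dim)
    return W.mean(axis=0)      # shape: (dim,)

# Toy 4-dim "embeddings" for a verb and its direct object.
verb = np.array([1.0, 0.0, 2.0, 0.0])
obj = np.array([0.0, 2.0, 0.0, 4.0])
phrase = additive_phrase([verb, obj])  # -> [0.5, 1.0, 1.0, 2.0]
```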
LSTM As a more sophisticated compositional
模型, we take the LSTM recurrent neural net-
work architecture of Hochreiter and Schmidhuber
(1997). We trained the LSTM on a natural lan-
guage inference task (Bowman et al., 2015), 作为
it is a complex semantic task where we expect
rich meaning representations to play an important
角色. Given two sentences, the goal of natural
language inference is to decide whether the first
entails or contradicts the second, or whether they
are unrelated. We used the LSTM to compute
compositional representations for each sentence,
and then used a single-layer perceptron classifier
(Bowman et al., 2016) to predict the correct re-
关系. The inputs to the LSTM were the same
100-dim GloVe embeddings used for the other
型号, and were updated during training. The model
was optimized using Adam (Kingma and Ba, 2014).
We extracted 100-dim vector representations from
the hidden state of the LSTM for the verb-object
phrases in our stimulus set.
4.2 Visually Grounded Models
We use the MMfeat toolkit (Kiela, 2016) to obtain
visual representations in line with Anderson et al.
(2017乙). We retrieved 10 images for each word
or phrase in our dataset using Google Image
搜索. We then extracted an embedding for each
of the images from a deep convolutional neural
network that was trained on the ImageNet classi-
fication task (Russakovsky et al., 2015). We used
an architecture consisting of five convolutional
1https://nlp.stanford.edu/projects/
glove/.
layers, followed by two fully connected rectified
linear unit layers and a softmax layer for clas-
sification. To obtain an embedding for a given
image we performed a forward pass through
the network and extracted the 4096-dim fully
connected layer that precedes the softmax layer.
The visual representation of a word or a phrase is
computed as the mean of its 10 individual image
陈述.
We experiment with word-based models
(VISUAL VERB and VISUAL OBJECT) and the follow-
ing three visual compositional models:
Concatenation This model represents the stim-
ulus phrase as the concatenation of the two D-dim
visual representations for the verb and the object
(VISUAL VERBOBJECT).
Addition We take the average of the visual
representations for the verb and object to give the
representation for the phrase (VISUAL ADDITION).
Phrase We obtain visual representations for the
phrase by querying Google Images for the verb-
object phrase directly (VISUAL PHRASE).
5 Decoding Brain Activity
5.1 fMRI Data Processing
For our experiments we limited analysis to the 12
individuals who completed all runs. The runs were
combined across time to form each participant’s
dataset and preprocessed (high-pass filtered, motion-
corrected, linearly detrended) with FSL.2
General Linear Modeling After fMRI prepro-
cessing, we selected sentences within the affir-
mative literal and affirmative metaphoric conditions
representative of the 31 unique verbs as conditions
of interest for all our experiments. We fit a model
of the hemodynamic response function to each
stimulus presentation using a univariate general
linear model with PyMVPA.3 The entire stimulus
presentation was modeled as an event lasting 6 s
after taking into account the hemodynamic lag of
4 s. The model parameters (beta weights) were
normalized to Z-scores. Each stimulus presen-
tation was then represented as a single volume
containing voxel-wise Z-score maps for each
of the 31 affirmative literal and 31 metaphoric
sentences. The affirmative literal or metaphoric
2https://fsl.fmrib.ox.ac.uk/fsl/fslwiki.
3http://www.pymvpa.org/.
neural estimates were then used to perform
similarity-based decoding, separately.
Voxel Selection We performed feature selection
by selecting the top 35% of voxels that showed the
highest sensitivity (F-statistics) using a univari-
ate ANOVA as a feature-wise measure with two
groups: the 31 affirmative literal sentences versus
31 affirmative metaphoric sentences. F-statistics
were computed for each feature as the standard
fraction of between and within group variances
using PyMVPA. This selected voxels sensitive to
univariate activation differences between literal
and metaphoric categories.
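The selection step can be sketched as follows (a schematic re-implementation; the study itself used PyMVPA's feature-wise ANOVA measure, and the function and array names here are ours):

```python
import numpy as np

def select_top_voxels(X, labels, fraction=0.35):
    """Keep the `fraction` of voxels (columns of X) with the highest
    one-way ANOVA F-statistic between the two stimulus groups."""
    groups = [X[labels == g] for g in np.unique(labels)]
    grand_mean = X.mean(axis=0)
    # Between-group and within-group sums of squares, per voxel.
    ss_between = sum(len(g) * (g.mean(axis=0) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    df_between = len(groups) - 1
    df_within = len(X) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    k = max(1, int(round(fraction * X.shape[1])))
    keep = np.sort(np.argsort(f)[-k:])  # indices of most sensitive voxels
    return X[:, keep], keep
```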
5.2 Defining Regions of Interest
Following Anderson et al. (2013), we performed
decoding at the whole-brain level and across four
gross anatomical divisions: the frontal, 颞,
枕骨, and parietal lobes. The masks were cre-
ated using the Montreal Neurological Institute
(MNI) Structural Atlas in FSL. We also defined
the following a priori ROIs to compare the perfor-
mance of literal and metaphoric decoding in visual
and sensorimotor brain regions vs. language-related
brain areas implicated in lexical-semantic process-
ing: (1) visual ROIs (left lateral occipital temporal
cortex [LLOTC], left ventral temporal cortex
[LVT]); (2) action ROIs (LPG, LPM); (3) language-
related ROIs (LMTP, LIFG). The LLOTC and
LVT were created manually in FSL using the
anatomical landmarks of Bugatus et al. (2017).
The LPG and LPM were created using the Juelich
Histological Atlas thresholded at 25% in FSL. 这
LMTP and LIFG were created using the Harvard-
Oxford Cortical Probabilistic Atlas thresholded at
25% in FSL. Masks were transformed from MNI
standard space into the participant’s functional
空间.
5.3 Similarity-Based Decoding
We use similarity-based decoding (Anderson et al.,
2016) to evaluate to what extent the represen-
tations produced by our semantic models are
able to decode brain activity patterns associated
with our stimuli. We first compute two similarity
matrices (k stimuli × k stimuli), containing sim-
ilarities between all stimulus phrases in the data-
放: the model similarity matrix (where similarities
are computed using the semantic model vectors)
and the brain similarity matrix (where similarities
are computed using the brain activity vectors). The
MODELS              Frontal  Parietal  Temporal  Occipital  Whole-Brain
OBJECT              0.55     0.53      0.50      0.40       0.50
VERB                0.67     0.69*     0.62      0.65       0.63
VERBOBJECT          0.57     0.57      0.51      0.55       0.52
ADDITION            0.72*    0.72*     0.69*     0.70*      0.69*
LSTM                0.58     0.50      0.55      0.56       0.53
VISUAL OBJECT       0.60     0.51      0.70*     0.69*      0.66
VISUAL VERB         0.41     0.44      0.59      0.66       0.50
VISUAL VERBOBJECT   0.49     0.49      0.61      0.63       0.55
VISUAL ADDITION     0.58     0.48      0.68*     0.65       0.57
VISUAL PHRASE       0.56     0.52      0.44      0.42       0.45
桌子 1: Literal Decoding. Leave-2-out decoding accuracies, significant values (p < 0.05) surviving
FDR correction for multiple comparisons indicated in bold by an asterisk (*).
similarities were computed using Pearson correla-
tion coefficient as a measure. We then perform the
decoding using a leave-two-out decoding scheme
in this similarity space. Specifically, from the set
of all possible pairs of stimuli (the number of
possible pairs for k = 31 stimuli is 465), a
single pair is selected at a time. Model similarity-
codes are obtained for each stimulus in the
pair by extracting the relevant column vectors
for those stimuli
from the model similarity-
matrix. In the same way, neural similarity-codes
are extracted from the neural similarity-matrix.
Correlations with the stimuli pairs themselves
are removed so as not to bias decoding. The model
similarity-codes of the two held-out stimuli are
correlated with their respective neural similarity-
codes. If the correct labeling scheme produces
a higher sum of correlation coefficients than the
incorrect labeling scheme, this is counted as a
correct classification, and otherwise as incor-
rect. When this procedure is completed for all
possible held-out pairs, the number of correct
classifications over the total number of possible
pairs yields a decoding accuracy. We perform
group-level similarity-decoding by first averaging
the neural similarity-codes across participants to
yield group-level neural similarity-codes, equiv-
alent to a fixed-effects analysis as in
Anderson et al. (2016). The group-level neural
similarity-codes and model similarity-codes are
then used to perform leave-two-out decoding as
described above.
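The leave-two-out scheme can be sketched as follows (an illustrative re-implementation of the procedure of Anderson et al. (2016); the function names are ours):

```python
import numpy as np
from itertools import combinations

def pearson_sim(X):
    """k x k matrix of Pearson correlations between row vectors
    (rows are stimuli: model vectors or voxel activity patterns)."""
    return np.corrcoef(X)

def leave_two_out_accuracy(model_sim, brain_sim):
    """For each held-out stimulus pair, correlate model and neural
    similarity-codes under the correct vs. swapped labeling; the correct
    labeling should yield the higher sum of correlations."""
    k = model_sim.shape[0]
    pairs = list(combinations(range(k), 2))
    n_correct = 0
    for i, j in pairs:
        # Similarity-codes with the pair's own entries removed (no bias).
        keep = [m for m in range(k) if m not in (i, j)]
        mi, mj = model_sim[i, keep], model_sim[j, keep]
        bi, bj = brain_sim[i, keep], brain_sim[j, keep]
        r = lambda a, b: np.corrcoef(a, b)[0, 1]
        correct_labeling = r(mi, bi) + r(mj, bj)
        swapped_labeling = r(mi, bj) + r(mj, bi)
        n_correct += correct_labeling > swapped_labeling
    return n_correct / len(pairs)
```

When the model and neural similarity matrices coincide, the procedure recovers every pair correctly, which is a useful sanity check.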
5.4 Statistical Significance
Statistical significance testing was carried out as in
Anderson et al. (2016) using a non-parametric
permutation test. The null-hypothesis is that there
is no correspondence between the model-based
similarity-codes and the group-level neural similarity
codes. The null-distribution was estimated using a
permutation scheme. We randomly shuffled the
rows and columns of the model-based similarity
matrix, leaving the neural similarity-matrix fixed.
Following each permutation (n = 10,000), we per-
form group-level similarity-decoding obtaining
10,000 decoding accuracies we would expect by
chance using random labeling. The p-value is then
the probability, under the null-distribution, of ob-
taining a decoding accuracy at least as large as the
observed accuracy score. We correct for the number of
statistical tests performed using False-Discovery-
Rate (FDR) with a corrected error probability
threshold of p = 0.05 (Benjamini and Hochberg,
1995).
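A minimal sketch of this permutation scheme (illustrative; `decode` is a stand-in for any function, such as the leave-two-out procedure, that maps two similarity matrices to an accuracy):

```python
import numpy as np

def permutation_pvalue(decode, model_sim, brain_sim, n_perm=10000, seed=0):
    """Non-parametric permutation test: jointly shuffle rows and columns
    of the model similarity matrix (the neural matrix stays fixed),
    re-run the decoding after each shuffle, and report the proportion of
    chance accuracies at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    k = model_sim.shape[0]
    observed = decode(model_sim, brain_sim)
    null = np.empty(n_perm)
    for t in range(n_perm):
        p = rng.permutation(k)
        null[t] = decode(model_sim[np.ix_(p, p)], brain_sim)
    # The +1 terms keep the estimated p-value strictly positive.
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)
```

Shuffling rows and columns together preserves the internal structure of the model similarity matrix while destroying its correspondence to the stimuli, which is exactly the null hypothesis being tested.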
6 Experiments and Results
We use group-level similarity-decoding to decode
brain activity associated with literal and metaphoric
sentences using each of our semantic models.
We perform decoding at the sentence level for
literal and metaphor conditions (affirmative only),
separately. Decoding was performed at the whole-
brain level and across the brain’s lobes, as well as
within a priori defined ROIs implicated in visual,
action and language-related processing.
6.1 Linguistic Models
Literal Sentences When decoding literal sen-
tences with linguistic models across the brain’s
lobes we found significant decoding accuracies
surviving FDR correction for multiple testing for
the ADDITION and VERB models, see Table 1. A
MODELS              Frontal  Parietal  Temporal  Occipital  Whole-Brain
OBJECT              0.61     0.62      0.68*     0.52       0.58
VERB                0.53     0.45      0.62      0.41       0.49
VERBOBJECT          0.68*    0.53      0.72*     0.67       0.60
ADDITION            0.71*    0.63      0.77*     0.75*      0.71*
LSTM                0.58     0.51      0.62      0.53       0.55
VISUAL OBJECT       0.59     0.59      0.60      0.42       0.56
VISUAL VERB         0.70*    0.66      0.67      0.58       0.63
VISUAL VERBOBJECT   0.67     0.64      0.64      0.49       0.62
VISUAL ADDITION     0.59     0.62      0.60      0.57       0.58
VISUAL PHRASE       0.70*    0.63      0.63      0.67       0.66
Table 2: Metaphor Decoding. Leave-2-out decoding accuracies, significant values (p < 0.05) surviving
FDR correction for multiple comparisons are indicated in bold by an asterisk (*).
two-way ANOVA without replication showed a
main effect for model F(4,16) = 38.22, p <
0.001 but not brain lobe. Post-hoc two-tailed
t-tests surviving correction for multiple testing
confirmed a significant advantage for the ADDITION
model over other models. Lastly, the VERB model
showed a significant decoding advantage over
all other models except the ADDITION model. A
post-hoc unpaired t-test confirmed a significant
advantage for the VERB model in literal versus
metaphor decoding (t = 3.97, p < 0.01, df =
8). The results suggest that the ADDITION and
VERB models are superior compared to other
models in decoding literal sentences. Furthermore,
they suggest that the VERB model more closely
captures the variance associated with the literal
compared to metaphoric category.
Metaphoric Sentences When decoding metaphor with linguistic models, we found significant decoding accuracies for the ADDITION, VERBOBJECT, and OBJECT models, mainly in the
Temporal lobe, see Table 2. A two-way ANOVA
showed a main effect of model F(4,16) = 18.77 ,
p < 0.001, and brain lobe F(4,16) = 7.58, p <
0.01. Post-hoc t-tests showed a significant advan-
tage for the ADDITION model over other models.
We also found that the VERBOBJECT model signif-
icantly outperformed the LSTM (t = 3.89,
p < 0.05, df = 4) and VERB (t = 4.36, p <
0.05, df = 4) models, while the OBJECT model also
outperformed the VERB model (t = 5.42, p < 0.01,
df = 4). Thus, models that incorporate the object
directly (i.e., OBJECT, VERBOBJECT), outperform
the VERB model. A post-hoc unpaired t-test con-
firmed that the performance of the OBJECT model
was higher in metaphor versus literal decoding
(t = 2.88, p < 0.05, df = 8). The results sug-
gest that the ADDITION, VERBOBJECT, and OBJECT
models are superior compared with other models
in decoding metaphoric sentences and, further-
more, that the OBJECT model more closely captures
the variance associated with the metaphor versus
literal category.
Lastly, additional post-hoc t-tests showed that the Temporal lobe significantly outperformed other lobes (except the Occipital lobe) across the
models. This suggests an advantage for linguistic
models in temporal areas, possibly pointing to an
increased dependence on memory and language
processing associated with medial and lateral
temporal areas, respectively.
6.2 Visual Models
Literal Sentences When decoding literal sen-
tences with visual models, we found significant
decoding accuracies for the VISUAL OBJECT and
VISUAL ADDITION models, mainly in Occipital and
Temporal lobes, see Table 1. A two-way ANOVA
showed a main effect of model, but this did not
survive multiple-testing correction. The results suggest that visual models can decode brain activity associated with concrete concepts only in
occipital-temporal areas, part of the ventral visual
stream, possibly pointing to increased reliance on
these areas for object recognition, but see ROI
analysis in section 6.3.
Metaphoric sentences When decoding meta-
phoric sentences with visual models in the brain,
we found significant decoding accuracies for both
the VISUAL VERB and VISUAL PHRASE model in the
                    Visual           Action-related   Language-related
MODELS              LLOCT    LVT     LPG      LPM     LMTP     LIFG
OBJECT              0.59     0.59    0.52     0.62    0.45     0.47
VERB                0.73*    0.72*   0.68*    0.77*   0.69*    0.60
VERBOBJECT          0.63     0.61    0.59     0.71*   0.47     0.51
ADDITION            0.74*    0.76*   0.74*    0.78*   0.69*    0.73*
LSTM                0.53     0.62    0.58     0.56    0.68*    0.61
VISUAL OBJECT       0.68*    0.71*   0.50     0.42    0.67*    0.57
VISUAL VERB         0.55     0.62    0.43     0.48    0.49     0.40
VISUAL VERBOBJECT   0.56     0.62    0.47     0.45    0.51     0.41
VISUAL ADDITION     0.48     0.69*   0.56     0.42    0.65     0.49
VISUAL PHRASE       0.40     0.40    0.41     0.54    0.44     0.44
Table 3: Region of Interest: Literal Decoding. Leave-2-out decoding accuracies; significant values (p < 0.05) surviving FDR correction for multiple comparisons for the ROIs are indicated in bold by an asterisk (*).
Frontal lobe, see Table 2. A two-way ANOVA
showed a main effect of model F(4,16) = 6.12,
p < 0.01 and brain lobe F(4,16) = 5.21, p <
0.01. Post-hoc t-tests showed that both the VISUAL
VERB (t = 5.40, p < 0.01, df = 4) and VISUAL
VERBOBJECT (t = 8.49, p < 0.01, df = 4) models
outperformed the VISUAL OBJECT model across the
lobes. This suggests that visual information about
the verb is more relevant to metaphor decod-
ing than that of the object. Relatedly, when com-
paring the performance of visual and linguistic
models across the lobes, we found that the OBJECT
model significantly outperformed the VISUAL OBJECT
model across the lobes, surviving correction for
multiple comparisons. In sum, these results sug-
gest that visual information corresponds more
strongly to the concrete verb, whereas linguistic
information corresponds more strongly with the
abstract object in metaphor decoding, but see ROI
analysis section 6.3. We found a main effect of
brain lobe that did not survive multiple-testing
correction.
6.3 Region of Interest (ROI) analysis
Literal Sentences When comparing the perfor-
mance of linguistic models across the ROIs, we
found that the performance of linguistic models
within language-related ROIs was on par with that
within vision and action ROIs, see Table 3. This
suggests that the linguistic models may be captur-
ing sensorimotor and visual representations in the
brain during literal sentence processing. Adding
to this, we observed that linguistic models sig-
nificantly outperformed visual models in action
ROIs (t = 6.83, p < 0.001, df = 9), suggesting
that the linguistic models are more closely able to
capture the motoric features and action semantics
relevant to literal sentence processing when com-
pared even to the more visually grounded models.
The results suggest that the visual models may
correlate with information in action-related brain
regions (e.g., sensorimotor representations). In
sum, the results suggest that literal sentence pro-
cessing involves both language-related and per-
ceptual/sensorimotor representations (relevant to
action semantics) in the brain.
Metaphoric Sentences When comparing the
performance of linguistic models across the ROIs
(see Table 4), we observed that linguistic models
were superior in decoding metaphoric sentences
in language-related ROIs compared to visual (t =
3.11, p < 0.05, df = 9) and action ROIs (t =
2.97, p < 0.05, df = 9). This suggests that lin-
guistic models mainly capture language-related
representations in the brain during metaphor pro-
cessing. Interestingly, we did observe that the
visual models significantly outperformed the
linguistic models in action related ROIs (t =
3.91, p < 0.01, df = 9) for metaphor decoding.
Relatedly, we also observed that visual models
were superior in decoding metaphoric sentences
in action compared with language-related ROIs
(t = 3.06, p < 0.05, df = 9), in contrast to literal
sentences as described above.
A post-hoc unpaired t-test confirmed that the
performance of visual models in action ROIs
                    Visual           Action-related   Language-related
MODELS              LLOCT    LVT     LPG      LPM     LMTP     LIFG
OBJECT              0.62     0.63    0.46     0.53    0.67*    0.67*
VERB                0.54     0.58    0.56     0.60    0.75*    0.57
VERBOBJECT          0.63     0.62    0.49     0.54    0.77*    0.64
ADDITION            0.69*    0.65    0.51     0.59    0.73*    0.70*
LSTM                0.56     0.47    0.54     0.64    0.61     0.51
VISUAL OBJECT       0.63     0.56    0.68*    0.64    0.73*    0.52
VISUAL VERB         0.65     0.66    0.69*    0.75*   0.55     0.63
VISUAL VERBOBJECT   0.70*    0.65    0.68*    0.74*   0.62     0.67*
VISUAL ADDITION     0.74*    0.57    0.69*    0.63    0.62     0.59
VISUAL PHRASE       0.58     0.62    0.61     0.54    0.60     0.54
Table 4: Region of Interest: Metaphor Decoding. Leave-2-out decoding accuracies, significant values (p < 0.05)
surviving FDR correction for multiple comparisons for the ROIs indicated in bold by an asterisk (*).
was significantly higher in metaphor versus literal
decoding (t = 8.92, p < 0.001, df = 18). The results
suggest that the visual models may correlate with
information in action-related brain regions (e.g., sensorimotor representations). The significant values reported are those that survived correction for multiple comparisons in the ROI analysis.
7 Discussion
Addition vs. LSTM We found that the ADDI-
TION model outperformed both lexical models and
the VERBOBJECT model. This suggests that com-
positional semantic models that average seman-
tic representations of the individual words in a
phrase can decode brain activity associated with
sentential meanings, irrespective of whether action-
verbs are used in a literal or metaphoric context.
The findings complement prior work showing that
regression-based models that use word embed-
dings as features can predict brain activity asso-
ciated with larger linguistic units (Wehbe et al.,
2014; Huth et al., 2016; Pereira et al., 2018).
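As a concrete illustration of this ADDITION-style composition — an element-wise combination of the verb and object embeddings — here is a minimal sketch. The 3-dimensional toy vectors are hypothetical; the actual models use full-dimensional word embeddings.

```python
import numpy as np

# Hypothetical toy embeddings for a verb-object phrase (illustration only).
emb = {
    "grasp": np.array([0.8, 0.1, 0.3]),
    "idea":  np.array([0.1, 0.9, 0.4]),
}

def addition(verb, obj):
    """ADDITION-style composition: element-wise average of the two embeddings."""
    return (emb[verb] + emb[obj]) / 2

phrase = addition("grasp", "idea")
```

Because averaging preserves dimensions shared by both words, features common to the verb and object are reinforced in the phrase vector, which is consistent with the enhancement account discussed later in this section.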
The LSTM, however, did not outperform the
other models. This is surprising given prior work
showing that contextual representations from an
unsupervised LSTM language model outperform
the bag-of-words model (Jain and Huth, 2018).
The authors show increasing performance gains
using representations from the second layer with
longer context lengths (i.e., > 3 words). However, using the representations from the last layer together with a shorter context window sometimes
showed inferior performance compared to the
word-embedding encoding model. The latter find-
ing is more closely aligned with our own param-
eters and findings. It is possible that the LSTM
model shows the largest performance gain over
the bag-of-words model when predicting brain
activity associated with narrative listening (i.e., where the subject must keep track of entities and events over longer periods). In contrast, our sentence comprehension task depends on the next
word for meaning disambiguation. It is also pos-
sible that semantic models trained in the NLI task
may not be ideally suited for capturing differ-
ences in literal and metaphor processing.
The Role of the Verb and the Object We found that the VERB model outperformed the other models (except the ADDITION model) in literal decoding. In contrast, in metaphor decoding we observed that models that incorporate the object directly (i.e., the VERBOBJECT and OBJECT models) outperformed the VERB model. Moreover, the performance of the VERB model was higher in literal versus metaphor decoding, while the OBJECT model showed the opposite pattern, with an advantage in metaphor decoding. It is possible
that the VERB model more closely captures the vari-
ance associated with the overall concrete meaning
in the brain. In support of this, the performance of
the linguistic models including that of the VERB
model was higher in action-related brain regions
in literal compared to metaphoric decoding. On the other hand, the OBJECT model may best capture the variance associated with the overall abstract meaning in the brain. The objects (topics) in
metaphoric sentences tend to be more abstract and
capture the overall aboutness of the metaphoric
meaning to a greater extent than the verb (vehicle).
In support of this,
in metaphor decoding the
linguistic models exhibited a higher performance
in language-related areas than within visual and
action-related areas. Critically, we restricted anal-
ysis to voxels showing maximum variance be-
tween the univariate brain response of literal and
metaphoric categories. Thus, the results mainly
highlight models that can decode literal and
metaphoric sentences to the extent that they are
able to identify the largest differences between
literal and metaphor processing in the brain, more generally. Therefore, the results do not necessarily
suggest that the VERB model, 例如, is not an
adequate representation for metaphoric sentences,
just that when distinguishing literal and meta-
phoric processing in the brain it more closely
aligns with representations for literal sentences.
Alternatively, the VERB model may be superior
in capturing the variance associated with the literal
案件, in particular compared to the OBJECT model,
as the verbs were found to be significantly more
frequent than their arguments for literal sentences in the training corpus. However, we also
found that the metaphoric uses of the verbs are
significantly more frequent than the literal uses
in the training corpus likely due to the fact
that written language often reflects more abstract
topics. Nevertheless, we found that the VERB model
showed higher performance in literal compared
to metaphoric decoding suggesting that frequency
of usage in the corpus does not always impact
decoding as might be expected. Importantly, the literal and metaphorical sentences did not differ in familiarity (i.e., subjective frequency), nor
did we find significant differences in the cloze
probabilities between the literal and metaphoric
phrases in the training corpus suggesting this
broader factor is not at play.
Taken together, the results with the linguis-
tic models suggest that one of the main ways
lexical-semantic similarity differs in literal versus
metaphor processing in the brain is along the
concrete versus abstract dimension, as we might
expect. The results are in line with prior neuro-
scientific studies showing that concrete concepts
recruit more sensorimotor areas, whereas abstract
concepts rely more heavily on language-related
brain regions (Hoffman et al., 2015). More spe-
cifically, the findings are in agreement with the
idea that action-related words and sentences are
embedded in action-perception circuits in the brain
due to co-occurrences between the words and the
action-percepts they denote (Pulvermuller, 2005).
然而, the extent to which action-perception
circuits are recruited may be modulated by the
linguistic context (Desai et al., 2013).
These results also shed light on possible factors
underlying the performance advantage we ob-
served for the ADDITION model over the lexical
models (and the VERBOBJECT model). The ADDITION
model enhances common features present in the
individual word embeddings of the verb and
object. Therefore, given the preference we
observed for the VERB over the OBJECT in literal
decoding (and vice versa for metaphor decoding),
this suggests that adding the complementary
embedding largely enhances lexical-semantic re-
lations already present in either the VERB or OBJECT alone rather than providing other significant dimensions of variance per se. For literal decoding, the OBJECT may enhance variance already associated with the VERB by narrowing the range of relevant object-directed actions (e.g., actions on inanimate versus animate objects), highlighting more concrete information. In contrast, for metaphor decoding it is more likely that the VERB
enhances variance associated with the OBJECT by
narrowing in on abstract uses as opposed to literal
uses of each object (e.g., ''writing the poem''
versus ‘‘grasping the poem’’), highlighting more
abstract information in the process. It should be
noted that this effect may be due to the fact that
we used familiar metaphors well represented in
the corpus, which will need to be investigated in
future work.
Visual Models We observed that the VISUAL
OBJECT and VISUAL ADDITION models performed
well in temporal-occipital areas. These results are
in line with prior work showing that visual models
can decode brain activity associated with concrete
concepts in lateral occipital-temporal areas part
of the ventral visual stream implicated in object
recognition (Anderson et al., 2015). However, this was not specific to literal decoding. In fact,
we observed that the VISUAL VERB and VISUAL
VERBOBJECT models outperformed the VISUAL
OBJECT model in metaphor decoding. Overall, we
found that the visual models outperformed lin-
guistic models in action-related ROIs in metaphor
decoding. The performance of visual models in
action ROIs was also significantly higher in
metaphor versus literal decoding. The latter sug-
gests that the visual models correlate with sen-
sorimotor features and may play a role in metaphor
processing in the brain. This could possibly sug-
gest that different aspects of the literal meaning of
the verb (distinct from its prototypical or salient
literal use) may play a role in metaphor processing
in the brain. These less salient motoric aspects of
the literal meaning captured by the visual verb
models could reflect (A) more abstract sensori-
motor representations such as information about
higher-level action goals or (乙) social-emotional
factors associated with each action, such as infor-
mation about people, 身体, or faces tied to inter-
oceptive experience.
It could also be the case that these aspects
of the literal meaning are not necessarily less
salient or prototypical, but are simply distinct from
the specific literal uses of verbs in our stimuli
(which contained primarily verb predicates with
inanimate objects as arguments). It is possible
that verb predicates with animate objects as argu-
ments involving social interactions may also be
relevant to the metaphoric meaning. Indeed, an
important embodied dimension of variance for ab-
stract concepts is social-emotional information
(Barsalou, 2009).
此外, it is possible that differences in
overall visual statistics between our images for
objects versus verbs across literal and metaphorical
sentences may have biased decoding. Kiela et al. (2014) show that images for concrete objects are more internally homogeneous (less dispersed) than those for abstract concepts, which may have impacted the performance of the VISUAL
OBJECT model in metaphor decoding. Importantly, however, differences in literal and metaphor de-
coding with the VISUAL VERB model should not
necessarily be impacted by this as the verbs
used were the same. 所以,
那
the visual models in action-related areas over-
all had higher decoding accuracies in metaphor
compared to literal decoding suggests that this
effect is not influenced by image dispersion. Ra-
ther this effect suggests that the VISUAL VERB
may capture sensorimotor features relevant to
metaphor decoding. Future studies will need to
more carefully consider these possible confound-
ing factors and possibly experiment with video
data in place of images.
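For reference, the image dispersion measure of Kiela et al. (2014) discussed above is the mean pairwise cosine distance among the image feature vectors collected for a concept; a minimal sketch (our own reimplementation of the published definition):

```python
import numpy as np

def image_dispersion(vecs):
    """Image dispersion (Kiela et al., 2014): mean pairwise cosine distance
    between the image feature vectors collected for one concept."""
    vecs = [v / np.linalg.norm(v) for v in np.asarray(vecs, dtype=float)]
    n = len(vecs)
    # Cosine distance = 1 - cosine similarity, averaged over all pairs.
    dists = [1.0 - float(np.dot(vecs[i], vecs[j]))
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)
```

Concrete concepts, whose images depict similar scenes, yield low dispersion; abstract concepts, whose images vary widely, yield high dispersion.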
Accessibility of the Literal Meaning When only
looking at the linguistic models, the results ap-
pear largely in line with the direct view or a
categorical processing of familiar metaphor in
which the literal meaning is not fully accessible.
The VERB model showed a clear advantage in lit-
eral compared to metaphor decoding. Moreover,
the VERB model showed significant decoding accu-
racies in motor areas only in the literal but not
metaphoric case, suggesting that the literal mean-
ing is not being fully simulated in the metaphoric
case. This aligns with neuroimaging work showing
that literal versus familiar metaphoric actions more
reliably activate motor areas (Desai et al., 2011).
Importantly, however, we found evidence that
the VERB model showed some significant decoding
accuracies for metaphor decoding in language-
related brain regions (e.g., LMTP). Future work
will need to determine whether this reflects dis-
tinct aspects of the literal meaning relevant to
metaphor processing or reflects lexico-semantic
information associated primarily with the more
abstract sense of the verb. Adding to this, the poor temporal resolution of fMRI does not permit looking at different temporal processing stages and, therefore, cannot rule out the idea that the literal meaning is initially fully accessed and, subsequently, (partially) discarded or suppressed.
We also found further evidence to suggest that
the linguistic context may modulate which rep-
resentations associated with the verb are most
accessible. Mainly, we found that visual models
including the VISUAL VERB model were superior
in decoding metaphoric versus literal sentences
in action-related brain areas. This suggests that
different aspects of the literal meaning (possibly
less salient or prototypical literal meanings) 可能
play a role in processing the metaphoric meaning.
Thus, while the results do not definitively adjudicate between different putative stages of metaphor processing, they nevertheless inform our understanding of the debate in that they suggest that future studies will need to consider (control for) contextual effects of literality and their role in the study of metaphor comprehension. For instance, it may be useful to present subjects
in the scanner with single words (grasp, push,
etc.) to assess a prototypical brain response and
then look at how different contexts (literal or
metaphorical) modulate that response over time.
This may reveal different kinds of processing
stages and the influence of bottom up (immedi-
ate and automatic) versus top down (context and
inference-driven) influences at play during literal
versus metaphor processing. This would permit
more carefully assessing the role of the literal
meaning in metaphor comprehension.
8 Conclusion and Future Directions
We presented the first study evaluating a range
of semantic models in their ability to decode
brain activity when reading literal and metaphoric
句子. We found evidence to suggest that com-
positional models can decode sentences irre-
spective of figurativeness in the brain and that
at least for the linguistic models the VERB model
may be more closely associated with the literal
(concrete) meaning and the OBJECT model more
closely associated with the metaphoric (抽象的)
意义. This includes a closer relationship be-
tween the VERB model and action-related brain
regions in the brain during literal sentence pro-
cessing, in line with neuroimaging work show-
ing that literal versus familiar metaphoric actions
more reliably activate sensorimotor areas. 这
adds support to the idea that the literal meaning
may not be as accessible for familiar metaphors.
Taken together, the linguistic model results are in
line with prior neuroscientific studies suggesting
that differences between literal and metaphoric
sentence processing align with concrete versus
abstract concept processing in the brain, mainly
with a greater reliance of concrete concepts on
sensorimotor areas, while abstract concepts rely
more heavily on language-related brain regions.
Interestingly, however, the results with the visual
models point to the need to also consider how
metaphor (abstract language) may be grounded
in more abstract knowledge about actions or
social-interaction.
Future studies will need to further investigate
the accessibility of the literal meaning (and ab-
stract meaning) in metaphor comprehension us-
ing a larger dataset, for example, by considering a wider range of metaphors (e.g., metaphoric uses of objects) representing different semantic domains and different degrees of ambiguity. Also, it may
be useful to consider event embeddings optimized
towards learning representations of events and
their thematic roles that may be better able to deal
with different verb senses by learning non-linear
compositions of predicates and their arguments
(Tilk et al., 2016).
References
Andrew J. Anderson, Jeffrey R. Binder, Leonardo Fernandino, Colin J. Humphries, Lisa L.
Conant, Mario Aguilar, Xixi Wang, Donias
Doko, and Rajeev D.S. Raizada. 2017A. 预-
dicting neural activity patterns associated with
sentences using a neurobiologically motivated
model of semantic representation. Cerebral
Cortex, 27(9):4379–4395.
安德鲁·J. 安德森, Elia Bruni, Ulisse Bordignon,
Massimo Poesio, and Marco Baroni. 2013.
Of words, eyes and brains: Correlating image-
based distributional semantic models with
neural representations of concepts. In Proceedings of the 2013 Conference on Empirical
Methods in Natural Language Processing,
pages 1960–1970, Seattle, Washington, USA. Association for Computational Linguistics.
Andrew J. Anderson, Elia Bruni, Alessandro
Lopopolo, Massimo Poesio, and Marco Baroni.
2015. Reading visually embodied meaning
from the brain: Visually grounded compu-
tational models decode visual-object mental
imagery induced by written text. 神经影像,
120:309–322.
Andrew J. Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017b. Visually
grounded and textual semantic models differ-
entially decode brain activity associated with
concrete and abstract nouns. Transactions of
the Association for Computational Linguistics,
5:17–30.
Andrew J. Anderson, Edmund C. Lalor, Feng Lin, Jeffrey R. Binder, Leonardo Fernandino, Colin J. Humphries, Lisa L. Conant, Rajeev
D. S. Raizada, Scott Grimm, and Xixi Wang.
2019. Multiple regions of a cortical network
commonly encode the meaning of words in mul-
tiple grammatical positions of read sentences.
Cerebral Cortex, 29(6):2396–2411.
Andrew J. Anderson, Benjamin D. Zinszer,
and Rajeev D.S. Raizada. 2016. Representa-
tional similarity encoding for fMRI: Pattern-based synthesis to predict brain activity
using stimulus-model-similarities. 神经影像,
128:44–53.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
Bengio. 2014. Neural machine translation by
jointly learning to align and translate. CoRR,
abs/1409.0473.
Valentina Bambini, Chiara Bertini, Walter
Schaeken, Alessandra Stella, and Francesco
Di Russo. 2016. Disentangling metaphor from
语境: An ERP study. Frontiers in Psychol-
奥吉, 7:559.
Lawrence W. Barsalou. 2008. Grounded cognition. Annual Review of Psychology, 59(1):617–645. PMID: 17705682.
Lawrence W. Barsalou. 2009. Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society
of London. Series B, Biological Sciences,
364(1521):1281–89.
Yoav Benjamini and Yosef Hochberg. 1995.
Controlling the false discovery rate: A practical
and powerful approach to multiple testing.
Journal of the Royal Statistical Society. Series
乙 (Methodological), 57(1):289–300.
Lior Bugatus, Kevin S. Weiner, and Kalanit Grill-Spector. 2017. Task alters category representations in prefrontal but not high-level visual cortex. NeuroImage, 155:437–449.
Luana Bulat, Stephen Clark, and Ekaterina
Shutova. 2017. Speaking, seeing,
理解: Correlating semantic models
with conceptual representation in the brain.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1092–1102, Copenhagen, Denmark. Association for Computational Linguistics.
Francesca Carota, Nikolaus Kriegeskorte, Hamed
Nili, and Friedemann Pulvermuller. 2017. Rep-
resentational similarity mapping of distribu-
tional semantics in left inferior frontal, middle temporal, and motor cortex. Cerebral Cortex,
27(1):294–309.
Kai-min Kevin Chang, Tom Mitchell, 和
Marcel Adam Just. 2010. Quantitative model-
ing of the neural representation of objects:
How semantic feature norms can account for
fMRI activation. Neuroimage: Special Issue
on Multi-variate Decoding and Brain Reading,
56(2):716–727.
Jeffrey R. Binder, Rutvik H. Desai, William W.
格雷夫斯, and Lisa L. Conant. 2009. Where is
the semantic system? A critical review and
meta-analysis of 120 functional neuroimaging
studies. Cerebral Cortex, 19(12):2767–96.
Rutvik H. Desai, Jeffrey R. Binder, Lisa L. Conant,
Quintino R. Mano, and Mark S. Seidenberg.
2011. The neural career of sensory-motor me-
taphors. Journal of Cognitive Neuroscience,
23(9):2376–86.
Jeffrey R. Binder, Chris F. Westbury, Edward T. Possing, Kristen A. McKiernan, and David
A. Medler. 2005. Distinct brain systems for pro-
cessing concrete and abstract concepts. Journal
of Cognitive Neuroscience, 17(6):905–17.
Rutvik H. Desai, Lisa L. Conant, Jeffrey R. Binder,
Haeil Park, and Mark S. Seidenberg. 2013. A
piece of the action: Modulation of sensory-
motor regions by action idioms and metaphors.
NeuroImage, 83:862–69.
Samuel R. Bowman, Gabor Angeli, Christopher
Potts, and Christopher D. Manning. 2015. A
large annotated corpus for learning natural
language inference. CoRR, abs/1508.05326.
Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi,
Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified
model for parsing and sentence understanding.
In Proceedings of the 54th Annual Meeting of
the Association for Computational Linguistics
(体积 1: Long Papers), pages 1466–1477.
Berlin, Germany. Association for Computa-
tional Linguistics.
Barry Devereux, Colin Kelly, and Anna Korhonen. 2010. Using fMRI activation to conceptual stimuli to evaluate methods for extracting conceptual representations from corpora. In Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics, pages 70–78, Los Angeles, USA. Association for Computational Linguistics.
Barry J. Devereux, Lorraine K. Tyler, Jeroen
Geertzen, and Billi Randall. 2014. The Centre
for Speech, Language and the Brain (CSLB)
concept property norms. Behavior Research
方法, 46(4):1–9.
Vesna G. Djokic, Ekaterina Shutova, Elisabeth
Wehling, Benjamin Bergen, and Lisa Aziz-
Zadeh. Forthcoming. Affirmation and negation
of metaphorical actions in the brain.
Evelina Fedorenko, Alfonso Nieto-Castanon, and
Nancy Kanwisher. 2012. Lexical and syn-
tactic representations in the brain: An fMRI
investigation with multivoxel pattern analyses.
Neuropsychologia, 4(50):499–513.
Evelina Fedorenko, Michael K. Behr, and Nancy
Kanwisher. 2011. Functional specificity for
high-level linguistic processing in the human
brain. Proceedings of the National Academy
of Sciences of the United States of America,
108(39):16428–33.
Leonardo Fernandino, Colin J. Humphries, Mark S. Seidenberg, William L. Gross, Lisa
L. Conant, and Jeffrey R. Binder. 2015. 预-
dicting brain activation patterns associated
with individual lexical concepts based on five
sensory-motor attributes. Neuropsychologia,
76:17–26.
Dedre Gentner and Brian F. Bowdle. 2005. 这
career of metaphor. Psychological Review,
112(1):193–216.
Sam Glucksberg. 2003. The psycholinguistics
of metaphor. Trends in Cognitive Sciences,
2(7):92–96.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Marcel A. Just, Jing Wang, and Vladimir L.
Cherkassky. 2017. Neural representations of the
concepts in simple sentences: Concept acti-
vation prediction and context effects. NeuroImage, 157:511–520.
David Kemmerer, Javier G. Castillo, 托马斯
Talavage, Stephanie Patterson, and Cynthia
Wiley. 2008. Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI. Brain and Language, 107(1):16–43.
Douwe Kiela. 2016. MMFeat: A toolkit for ex-
tracting multi-modal features. In Proceedings of
ACL-2016 System Demonstrations, pages 55–60,
Berlin, Germany. Association for Computa-
tional Linguistics.
Douwe Kiela, Felix Hill, Anna Korhonen, 和
Stephen Clark. 2014. Improving multi-modal
representations using image dispersion: 为什么
less is sometimes more. In Proceedings of the
52nd Annual Meeting of the Association for
Computational Linguistics (Volume 2: Short Papers), pages 835–841, Baltimore, Maryland. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2014. 亚当:
A method for stochastic optimization. CoRR,
abs/1412.6980.
George Lakoff. 1980. Metaphors We Live By,
University of Chicago Press, Chicago.
Paul Hoffman, Richard J. Binney, and Matthew A. Lambon Ralph. 2015. Differing contribu-
tions of inferior prefrontal and anterior tempo-
ral cortex to concrete and abstract conceptual
知识. Cortex, 63:250–66.
Ken McRae, George S. Cree, Mark S. Seidenberg,
and Chris McNorgan. 2005. Semantic feature
production norms for a large set of living and
nonliving things. Behavior Research Methods,
37(4):547–559.
Alexander G. Huth, Wendy A. de Heer, Thomas L.
Griffiths, Frederic E. Theunissen, and Jack L.
Gallant. 2016. Natural speech reveals the se-
mantic maps that tile human cerebral cortex.
Nature, 532(7600):453–458.
Shailee Jain and Alexander Huth. 2018. Incor-
porating context into language encoding models
for fMRI. In S. Bengio, H. Wallach, H. Larochelle,
K. Grauman, N. Cesa-Bianchi, and R. Garnett,
editors, Advances in Neural Information Pro-
cessing Systems 31, pages 6628–6637. Curran
Associates, Inc.
Tomas Mikolov, Kai Chen, Greg Corrado, and
Jeffrey Dean. 2013. Efficient estimation of
word representations in vector space. arXiv,
abs/1301.3781v3.
Tom M. Mitchell, Svetlana V. Shinkareva,
Andrew Carlson, Kai-Min Chang, Vicente L.
Malave, Robert A. Mason, and Marcel Adam
Just. 2008. Predicting human brain activity
associated with the meanings of nouns. Science,
320(5880):1191–1195.
David S. Sabsevitz, David A. Medler, Mark
Seidenberg, and Jeffrey R. Binder. 2005. Mod-
ulation of the semantic system by word image-
ability. NeuroImage, 27(1):188–200.
Ottokar Tilk, Vera Demberg, Asad Sayeed,
Dietrich Klakow, and Stefan Thater. 2016.
Event participant modelling with neural net-
works. In Proceedings of the 2016 Conference
on Empirical Methods in Natural Language
Processing, pages 171–182. Austin, Texas.
Association for Computational Linguistics.
Jing Wang, Vladimir L. Cherkassky, and Marcel
A. Just. 2017. Predicting the brain activa-
tion pattern associated with the propositional
content of a sentence: Modeling neural repre-
sentations of events and states. Human Brain
Mapping, 38(10):4865–4881.
Leila Wehbe, Brian Murphy, Partha Talukdar,
Alona Fyshe, Aaditya Ramdas, and Tom
Mitchell. 2014. Simultaneously uncovering the
patterns of brain regions involved in differ-
ent story reading subprocesses. PLoS ONE,
9(11):e112575.
Yangwen Xu, Qixiang Lin, Zaizhu Han, Yong
He, and Yanchao Bi. 2016. Intrinsic functional
network architecture of human semantic pro-
cessing: Modules and hubs. NeuroImage,
132:542–55.
Saif Mohammad, Ekaterina Shutova, and Peter
Turney. 2016. Metaphor as a medium for
emotion: An empirical study. In Proceed-
ings of the Fifth Joint Conference on Lexi-
cal and Computational Semantics, pages 23–33.
Berlin, Germany. Association for Computa-
tional Linguistics.
Allan Paivio. 1971. Imagery and Verbal Pro-
cesses, Holt, Rinehart, & Winston, New York.
Jeffrey Pennington, Richard Socher, and
Christopher Manning. 2014. GloVe: Global
vectors for word representation. In Proceedings
of the 2014 Conference on Empirical Methods
in Natural Language Processing (EMNLP),
pages 1532–1543. Doha, Qatar. Association
for Computational Linguistics.
Francisco Pereira, Matthew Botvinick, and Greg
Detre. 2013. Using Wikipedia to learn semantic
feature representations of concrete concepts
in neuroimaging experiments. Artificial Intel-
ligence, 194:240–252.
Francisco Pereira, Bin Lou, Brianna Pritchett,
Samuel Ritter, Samuel J. Gershman, Nancy
Kanwisher, Matthew Botvinick, and Evelina
Fedorenko. 2018. Toward a universal decoder
of linguistic meaning from brain activation.
Nature Communications, 9:963.
Friedemann Pulvermüller. 2005. Brain mecha-
nisms linking language and action. Nature Re-
views Neuroscience, 6:576–582.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan
Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla,
Michael Bernstein, Alexander C. Berg, and
Li Fei-Fei. 2015. ImageNet Large Scale Visual
Recognition Challenge. IJCV, 115(3):211–252.