Attentive Convolution:
Equipping CNNs with RNN-style Attention Mechanisms
Wenpeng Yin
Department of Computer and Information
Science, University of Pennsylvania
wenpeng@seas.upenn.edu
Hinrich Schütze
Center for Information and Language
Processing, LMU Munich, Germany
inquiries@cislmu.org
Abstract
In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text tx.
In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context, but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text tx that are distant or (ii) from extra (i.e., external) contexts ty. Experiments on sentence modeling with zero-context (sentiment analysis), single-context (textual entailment) and multiple-context (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs.1
1 Introduction
Natural language processing (NLP) has benefited
greatly from the resurgence of deep neural net-
works (DNNs), thanks to their high performance
with less need of engineered features. A DNN typ-
ically is composed of a stack of non-linear trans-
1https://github.com/yinwenpeng/Attentive_
Convolution.
formation layers, each generating a hidden rep-
resentation for the input by projecting the output
of a preceding layer into a new space. To date,
building a single and static representation to ex-
press an input across diverse problems is far from
satisfactory. Instead, it is preferable that the rep-
resentation of the input vary in different applica-
tion scenarios. In response, attention mechanisms
(Graves, 2013; Graves et al., 2014) have been pro-
posed to dynamically focus on parts of the in-
put that are expected to be more specific to the
problem. They are mostly implemented based on
fine-grained alignments between two pieces of ob-
jects, each emitting a dynamic soft-selection to the
components of the other, so that the selected ele-
ments dominate in the output hidden representa-
tion. Attention-based DNNs have demonstrated good
performance on many tasks.
Convolutional neural networks (CNNs; LeCun
et al., 1998) and recurrent neural networks (RNNs;
Elman, 1990) are two important types of DNNs.
Most work on attention has been done for RNNs.
Attention-based RNNs typically take three types
of inputs to make a decision at the current step:
(i) the current input state, (ii) a representation of local context (computed unidirectionally or bidirectionally; Rocktäschel et al. [2016]), and (iii) the attention-weighted sum of hidden states corresponding to nonlocal context (e.g., the hidden states of the encoder in neural machine translation; Bahdanau et al. [2015]). An important question, therefore, is whether CNNs can benefit from such
our technical motivation.
Our second motivation is natural language un-
derstanding. In generic sentence modeling without
extra context (Collobert et al., 2011; Kalchbrenner
et al., 2014; Kim, 2014), CNNs learn sentence rep-
resentations by composing word representations
that are conditioned on a local context window.
We believe that attentive convolution is needed
Transactions of the Association for Computational Linguistics, vol. 6, pp. 687–702, 2018. Action Editor: Slav Petrov.
Submission batch: 6/2018; Revision batch: 10/2018; Published 12/2018.
© 2018 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
premise, modeled as context ty                              label
Plant cells have structures that animal cells lack.          0
Animal cells do not have cell walls.                          1
The cell wall is not a freestanding structure.                0
Plant cells possess a cell wall, animals never.               1

Table 1: Examples of four premises for the hypothesis tx = "A cell wall is not present in animal cells." in the SCITAIL data set. Right column (hypothesis's label): "1" means true, "0" otherwise.
for some natural language understanding tasks that
are essentially sentence modeling within contexts.
Examples: textual entailment (is a hypothesis true
given a premise as the single context?; Dagan
et al. [2013]) and claim verification (is a claim cor-
rect given extracted evidence snippets from a text
corpus as the context?; Thorne et al. [2018]). Con-
sider the SCITAIL (Khot et al., 2018) textual en-
tailment examples in Table 1; here, the input text
tx is the hypothesis and each premise is a context
text ty. And consider the illustration of claim ver-
ification in Figure 1; here, the input text tx is the
claim and ty can consist of multiple pieces of con-
text. In both cases, we would like the representa-
tion of tx to be context-specific.
In this work, we propose attentive convolution networks, ATTCONV, to model a sentence (i.e., tx) either in intra-context (where ty = tx) or extra-context (where ty ≠ tx and ty can have many pieces) scenarios. In the intra-context case (sentiment analysis, for example), ATTCONV extends the local context window of standard CNNs to cover the entire input text tx. In the extra-context case, ATTCONV extends the local context window to cover accompanying contexts ty.
For a convolution operation over a window in tx such as (leftcontext, word, rightcontext), we first compare the representation of word with all hidden states in the context ty to obtain an attentive context representation attcontext; then convolution filters derive a higher-level representation for word, denoted as wordnew, by integrating word with three pieces of context: leftcontext, rightcontext, and attcontext. We interpret this attentive convolution from two perspectives. (i) For intra-context, a higher-level word representation wordnew is learned by considering the local (i.e., leftcontext and rightcontext) as well as nonlocal (i.e., attcontext) context. (ii) For extra-context, wordnew is generated to represent word, together with its cross-text alignment attcontext, in the context leftcontext and rightcontext. In other words, the decision for the word is made based on the connected hidden states of cross-text aligned terms, with local context.

Figure 1: Verify claims in contexts.
We apply ATTCONV to three sentence mod-
eling tasks with variable-size context: a large-
scale Yelp sentiment classification task (Lin et al.,
2017) (intra-context, i.e., no additional context),
SCITAIL textual entailment (Khot et al., 2018)
(single extra-context),
and claim verification
(Thorne et al., 2018) (multiple extra-contexts).
ATTCONV outperforms competitive DNNs with
and without attention and achieves state-of-the-art
on the three tasks.
Overall, we make the following contributions:
• This is the first work that equips convolution
filters with the attention mechanism com-
monly used in RNNs.
• We distinguish and build flexible modules—
attention source, attention focus, and atten-
tion beneficiary—to greatly advance the ex-
pressivity of attention mechanisms in CNNs.
• ATTCONV provides a new way to broaden
the originally constrained scope of filters in
conventional CNNs. Broader and richer con-
text comes from either external context (i.e., ty) or the sentence itself (i.e., tx).
• ATTCONV shows its flexibility and effec-
tiveness in sentence modeling with variable-
size context.
2 Related Work
In this section we discuss attention-related DNNs
in NLP, the most relevant work for our paper.
2.1 RNNs with Attention
Graves (2013) and Graves et al. (2014) first in-
troduced a differentiable attention mechanism that
allows RNNs to focus on different parts of the
input. This idea has been broadly explored in
RNNs, shown in Figure 2, to deal with text generation, such as neural machine translation
Figure 2: A simplified illustration of the attention mechanism in RNNs.
(Bahdanau et al., 2015; Luong et al., 2015; Kim
et al., 2017; Libovický and Helcl, 2017), response
generation in social media (Shang et al., 2015),
document reconstruction (Li et al., 2015), and
document summarization (Nallapati et al., 2016);
machine comprehension (Hermann et al., 2015;
Kumar et al., 2016; Xiong et al., 2016; Seo et al.,
2017; Wang and Jiang, 2017; Xiong et al., 2017;
Wang et al., 2017a); and sentence relation classi-
fication, such as textual entailment (Cheng et al.,
2016; Rocktäschel et al., 2016; Wang and Jiang,
2016; Wang et al., 2017b; Chen et al., 2017b) and
answer sentence selection (Miao et al., 2016).
We try to explore the RNN-style attention mech-
anisms in CNNs—more specifically, in convolution.
2.2 CNNs with Attention
In NLP, there is little work on attention-based
CNN. Gehring et al. (2017) propose an attention-
based convolutional seq-to-seq model for machine
translation. Both the encoder and decoder are hi-
erarchical convolution layers. At the nth layer of
the decoder, the output hidden state of a convolu-
tion queries each of the encoder-side hidden states,
then a weighted sum of all encoder hidden states
is added to the decoder hidden state, and finally
this updated hidden state is fed to the convolution
at layer n + 1. Their attention implementation re-
lies on the existence of a multi-layer convolution
structure—otherwise the weighted context from
the encoder side could not play a role in the de-
coder. So essentially their attention is achieved af-
ter convolution. In contrast, we aim to modify the
vanilla convolution, so that CNNs with attentive
convolution can be applied more broadly.
We discuss two systems that are representative
of CNNs that implement the attention in pooling
(i.e., the convolution is still not affected): Yin et al. (2016) and dos Santos et al. (2016), illustrated in Figure 3.

Figure 3: Attentive pooling, summarized from ABCNN (Yin et al., 2016) and APCNN (dos Santos et al., 2016).

Specifically, these two systems work on two input sentences, each with a set of
hidden states generated by a convolution layer;
then, each sentence will learn a weight for ev-
ery hidden state by comparing this hidden state
with all hidden states in the other sentence; finally,
each input sentence obtains a representation by a
weighted mean pooling over all its hidden states.
The core component—weighted mean pooling—
was referred to as “attentive pooling,” aiming to
yield the sentence representation.
In contrast to attentive convolution, attentive
pooling does not connect directly the hidden states
of cross-text aligned phrases in a fine-grained
manner to the final decision making; only the
matching scores contribute to the final weighting
in mean pooling. This important distinction be-
tween attentive convolution and attentive pooling
is further discussed in Section 3.3.
Inspired by the attention mechanisms in RNNs,
we assume that it is the hidden states of aligned
phrases rather than their matching scores that can
better contribute to representation learning and deci-
sion making. Hence, our attentive convolution differs
from attentive pooling in that it uses attended hidden
states from extra context (cioè., ty) or broader-range
context within tx to participate in the convolution.
In experiments, we will show its superiority.
3 ATTCONV Model
We use bold uppercase (e.g., H) for matrices; bold lowercase (e.g., h) for vectors; bold lowercase with index (e.g., hi) for columns of H; and non-bold lowercase for scalars.
(a) Light attentive convolution layer
(b) Advanced attentive convolution layer
Figure 4: ATTCONV models sentence tx with context ty.
To start, we assume that a piece of text t (t ∈
{tx, ty}) is represented as a sequence of hidden
states hi ∈ Rd (i = 1, 2, . . . , |t|), forming feature map H ∈ Rd×|t|, where d is the dimensionality
of hidden states. Each hidden state hi has its left
context li and right context ri. In concrete CNN
systems, contexts li and ri can cover multiple adja-
cent hidden states; we set li = hi−1 and ri = hi+1
for simplicity in the following description.
We now describe light and advanced versions
of ATTCONV. Recall that ATTCONV aims to com-
pute a representation for tx in a way that convolu-
tion filters encode not only local context, but also
attentive context over ty.
3.1 Light ATTCONV
Figure 4(a) shows the light version of ATTCONV.
It differs in two key points—(i) and (ii)—both from the basic convolution layer that models a single piece of text and from the Siamese CNN that models two text pieces in parallel. (i) A matching function determines how relevant each hidden state in the context ty is to the current hidden state h^x_i in sentence tx. We then compute an average of the hidden states in the context ty, weighted by the matching scores, to get the attentive context c^x_i for h^x_i. (ii) The convolution for position i in tx integrates hidden state h^x_i with three sources of context: left context h^x_{i−1}, right context h^x_{i+1}, and attentive context c^x_i.
Attentive Context. First, a function generates a matching score e_{i,j} between a hidden state in tx and a hidden state in ty by (i) dot product:

e_{i,j} = (h^x_i)^T · h^y_j    (1)

or (ii) bilinear form:

e_{i,j} = (h^x_i)^T W_e h^y_j    (2)

(where W_e ∈ Rd×d), or (iii) additive projection:

e_{i,j} = (v_e)^T · tanh(W_e · h^x_i + U_e · h^y_j)    (3)

where W_e, U_e ∈ Rd×d and v_e ∈ Rd.

Given the matching scores, the attentive context c^x_i for hidden state h^x_i is the weighted average of all hidden states in ty:

c^x_i = Σ_j softmax(e_i)_j · h^y_j    (4)

We refer to the concatenation of attentive contexts [c^x_1; . . . ; c^x_i; . . . ; c^x_{|tx|}] as the feature map Cx ∈ Rd×|tx| for tx.
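As a concrete illustration, the following is a minimal NumPy sketch of the attentive-context computation in Equations (1) and (4), using dot-product matching; the variable names (Hx, Hy, Cx) and the column-wise feature-map layout are our assumptions, not the paper's released code.

```python
import numpy as np

def attentive_context(Hx, Hy):
    """Attentive contexts Cx (Eq. 4) with dot-product matching (Eq. 1).

    Hx: d x |tx| feature map of sentence tx.
    Hy: d x |ty| feature map of context ty.
    Returns Cx: d x |tx|, one attentive context vector per position of tx.
    """
    e = Hx.T @ Hy                                    # e[i, j] = (h^x_i)^T h^y_j
    e = e - e.max(axis=1, keepdims=True)             # softmax over ty positions, stabilized
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    Cx = Hy @ alpha.T                                # c^x_i = sum_j alpha[i, j] * h^y_j
    return Cx

# Example: d = 300, |tx| = 8, |ty| = 12  ->  Cx has shape (300, 8).
Cx = attentive_context(np.random.randn(300, 8), np.random.randn(300, 12))
```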
Attentive Convolution. After attentive context has been computed, a position i in the sentence tx has a hidden state h^x_i, the left context h^x_{i−1}, the right context h^x_{i+1}, and the attentive context c^x_i. Attentive convolution then generates the higher-level hidden state at position i:

h^x_{i,new} = tanh(W · [h^x_{i−1}, h^x_i, h^x_{i+1}, c^x_i] + b)    (5)
            = tanh(W1 · [h^x_{i−1}, h^x_i, h^x_{i+1}] + W2 · c^x_i + b)    (6)

where W ∈ Rd×4d is the concatenation of W1 ∈ Rd×3d and W2 ∈ Rd×d, and b ∈ Rd.

As Equation (6) shows, Equation (5) can be achieved by summing up the results of two separate and parallel convolution steps before the non-linearity. The first is still a standard convolution-without-attention over feature map Hx with filter width 3 over the window (h^x_{i−1}, h^x_i, h^x_{i+1}). The second is a convolution on the feature map Cx (i.e., the attentive context) with filter width 1 (i.e., over each c^x_i); then we sum up the results element-wise and add a bias term and the non-linearity. This divide-then-compose strategy makes attentive convolution easy to implement in practice, with no need to create a new feature map, as required in Equation (5), to integrate Hx and Cx.

role         text
premise      Three firefighters come out of subway station
hypothesis   Three firefighters putting out a fire inside of a subway station

Table 2: Multi-granular alignments required in textual entailment.
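The divide-then-compose strategy of Equation (6) can be sketched as follows; this is a minimal NumPy illustration under our own assumptions (zero-padding at the sentence boundaries, an explicit per-position loop instead of a batched convolution), not the official implementation.

```python
import numpy as np

def attentive_convolution(Hx, Cx, W1, W2, b):
    """Light attentive convolution, Eq. (6): a width-3 convolution over Hx
    plus a width-1 convolution over Cx, summed before the tanh.

    Hx: d x n feature map of tx      W1: d x 3d (vanilla filter, width 3)
    Cx: d x n attentive contexts     W2: d x d  (filter over Cx, width 1)
    b : d bias vector.               Returns the new d x n feature map.
    """
    d, n = Hx.shape
    Hpad = np.concatenate([np.zeros((d, 1)), Hx, np.zeros((d, 1))], axis=1)
    out = np.empty((d, n))
    for i in range(n):
        window = np.concatenate([Hpad[:, i], Hpad[:, i + 1], Hpad[:, i + 2]])
        out[:, i] = np.tanh(W1 @ window + W2 @ Cx[:, i] + b)
    return out
```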
It is worth mentioning that W1 ∈ Rd×3d cor-
responds to the filter parameters of a vanilla CNN
and the only added parameter here is W2 ∈ Rd×d,
which only depends on the hidden size.
This light ATTCONV shows the basic princi-
ples of using RNN-style attention mechanisms in
convolution. Our experiments show that this light
version of ATTCONV—even though it incurs a
limited increase of parameters (i.e., W2)—works
much better than the vanilla Siamese CNN and
some of the pioneering attentive RNNs. The fol-
lowing two considerations show that there is space
to improve its expressivity.
(io) Higher-level or more abstract representa-
tions are required in subsequent layers. We find
that directly forwarding the hidden states in tx or
ty to the matching process does not work well in
some tasks. Pre-learning some more high-level or
abstract representations helps in subsequent learn-
ing phases.
(ii) Multi-granular alignments are preferred
in the interaction modeling between tx and ty.
Table 2 shows another example of textual entail-
ment. On the unigram level, “out” in the premise
matches with “out” in the hypothesis perfectly,
whereas “out” in the premise is contradictory
to “inside” in the hypothesis. But their context
snippets—“come out” in the premise and “putting
out a fire” in the hypothesis—clearly indicate
that they are not semantically equivalent. And the
gold conclusion for this pair is "neutral" (i.e., the hypothesis is possibly true). Therefore, matching
should be conducted across phrase granularities.
We now present advanced ATTCONV. It is more
expressive and modular, based on the two forego-
ing considerations (io) E (ii).
3.2 Advanced ATTCONV
Adel and Schütze (2017) distinguish between
focus and source of attention. The focus of atten-
tion is the layer of the network that is reweighted
by attention weights. The source of attention is the
information source that is used to compute the
attention weights. Adel and Schütze showed that
increasing the scope of the attention source is
beneficial. It possesses some preliminary princi-
ples of the query/key/value distinction by Vaswani
et al. (2017). Here, we further extend this princi-
ple to define beneficiary of attention – the feature
map (labeled "beneficiary" in Figure 4(b)) that is contextualized by the attentive context (labeled "attentive context" in Figure 4(b)).
In the light attentive convolutional layer (Figure 4(a)), the source of attention is hidden states in sentence tx, the focus of attention is hidden states of the context ty, and the beneficiary of attention is again the hidden states of tx; that is, it is identical to the source of attention.
We now try to distinguish these three con-
cepts further to promote the expressivity of an at-
tentive convolutional layer. We call it “advanced
ATTCONV"; see Figure 4(b). It differs from the
light version in three ways: (io) attention source is
learned by function fmgran(Hx), feature map Hx
of tx acting as input; (ii) attention focus is learned
by function fmgran(Hy), feature map Hy of con-
text ty acting as input; E (iii) attention benefi-
ciary is learned by function fbene(Hx), Hx acting
as input. Both functions fmgran() and fbene() are
based on a gated convolutional function fgconv():
o_i = tanh(Wh · i_i + bh)    (7)
g_i = sigmoid(Wg · i_i + bg)    (8)
fgconv(i_i) = g_i · u_i + (1 − g_i) · o_i    (9)
where ii is a composed representation, denoting
a generally defined input phrase [· · · , ui, · · · ] of
arbitrary length with ui as the central unigram-
level hidden state, and the gate gi sets a trade-off
between the unigram-level input ui and the tem-
porary output oi at the phrase-level. We elaborate
these modules in the remainder of this subsection.
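A minimal sketch of the gated convolutional function fgconv of Equations (7)-(9); the argument order and shapes are our assumptions. Here i_i is the (concatenated) phrase representation and u_i is its central unigram-level hidden state.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f_gconv(i_i, u_i, Wh, bh, Wg, bg):
    """Gated convolution, Eqs. (7)-(9): trade off the unigram-level input u_i
    against the phrase-level candidate o_i via the gate g_i."""
    o_i = np.tanh(Wh @ i_i + bh)           # Eq. (7): phrase-level candidate
    g_i = sigmoid(Wg @ i_i + bg)           # Eq. (8): gate
    return g_i * u_i + (1.0 - g_i) * o_i   # Eq. (9): highway-style mixture
```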
Attention Source. First, we present a general
instance of generating source of attention by func-
tion fmgran(H),
learning word representations
in multi-granular context. In our system, we con-
sider granularities 1 E 3, corresponding to
unigram hidden state and trigram hidden state. For
the uni-hidden state case, it is a gated convolution
layer:
h^x_{uni,i} = fgconv(h^x_i)    (10)
For the tri-hidden state case:

h^x_{tri,i} = fgconv([h^x_{i−1}, h^x_i, h^x_{i+1}])    (11)

Finally, the overall hidden state at position i is the concatenation of h^x_{uni,i} and h^x_{tri,i}:

h^x_{mgran,i} = [h^x_{uni,i}, h^x_{tri,i}]    (12)

that is, fmgran(Hx) = Hx_mgran.
Such a kind of comprehensive hidden state can
encode the semantics of multigranular spans at
a position, such as “out” and “come out of.”
Gating here implicitly enables cross-granular
alignments in subsequent attention mechanism as
it sets highway connections (Srivastava et al.,
2015) between the input granularity and the output
granularity.
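The multi-granular attention source of Equations (10)-(12) can then be sketched as below, reusing the gated convolution of the previous sketch; the padding, the parameter tuples (Wh, bh, Wg, bg), and their shapes (d×d matrices for the unigram case, d×3d for the trigram case) are our assumptions.

```python
import numpy as np

def f_gconv(i_i, u_i, Wh, bh, Wg, bg):
    # gated convolution of Eqs. (7)-(9), as in the previous sketch
    o = np.tanh(Wh @ i_i + bh)
    g = 1.0 / (1.0 + np.exp(-(Wg @ i_i + bg)))
    return g * u_i + (1.0 - g) * o

def multi_granular(Hx, params_uni, params_tri):
    """Attention source Hx_mgran (Eq. 12): concatenate the uni-granular
    (Eq. 10) and tri-granular (Eq. 11) gated-convolution outputs per position."""
    d, n = Hx.shape
    Hpad = np.concatenate([np.zeros((d, 1)), Hx, np.zeros((d, 1))], axis=1)
    cols = []
    for i in range(n):
        h_uni = f_gconv(Hx[:, i], Hx[:, i], *params_uni)                  # Eq. (10)
        tri = np.concatenate([Hpad[:, i], Hpad[:, i + 1], Hpad[:, i + 2]])
        h_tri = f_gconv(tri, Hx[:, i], *params_tri)                       # Eq. (11)
        cols.append(np.concatenate([h_uni, h_tri]))                       # Eq. (12)
    return np.stack(cols, axis=1)                                         # shape 2d x n
```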
Attention Focus. For simplicity, we use the
same architecture for the attention source (just introduced) and for the attention focus ty (i.e., for the attention focus: fmgran(Hy) = Hy_mgran; see Figure 4(b)). Thus, the focus of attention will participate in the matching process as well as be reweighted to form an attentive context vector. We
leave exploring different architectures for atten-
tion source and focus for future work.
Another benefit of multi-granular hidden states
in attention focus is to keep structure information in
the context vector. In standard attention mechanisms
in RNNs, all hidden states are average-weighted
as a context vector, and the order information is
missing. By introducing hidden states of larger
granularity into CNNs that keep the local order or
structures, we boost the attentive effect.
Attention Beneficiary. In our system, we sim-
ply use fgconv() over uni-granularity to learn a
more abstract representation over the current hid-
den representations in Hx, so that
fbene(h^x_i) = fgconv(h^x_i)    (13)

Subsequently, the attentive context vector c^x_i is generated based on the attention source feature map fmgran(Hx) and the attention focus feature map fmgran(Hy), according to the description of the light ATTCONV. Then attentive convolution is conducted over the attention beneficiary feature map fbene(Hx) and the attentive context vectors Cx to get a higher-layer feature map for the sentence tx.
3.3 Analysis
Compared with the standard attention mechanism
in RNNs, ATTCONV has a similar matching func-
tion and a similar process of computing context
vectors, but differs in three ways. (i) The dis-
crimination of attention source, focus, and ben-
eficiary improves expressivity. (ii) In CNNs, IL
surrounding hidden states for a concrete position
are available, so the attention matching is able to
encode the left context as well as the right con-
testo. In RNNs, Tuttavia, we need bidirectional
RNNs to yield both left and right context
representations. (iii) As attentive convolution can
be implemented by summing up two separate
convolution steps (Equations 5 E 6), this ar-
chitecture provides both attentive representations
and representations computed without
the use
of attention. This strategy is helpful in practice to
use richer representations for some NLP prob-
lems.
In contrast, such a clean modular separa-
tion of representations computed with and without
attention is harder to realize in attention-based
RNNs.
Prior attention mechanisms explored in CNNs
mostly involve attentive pooling (dos Santos et al.,
2016; Yin et al., 2016); namely, the weights of the
post-convolution pooling layer are determined by
attention. These weights come from the matching
process between hidden states of two text pieces.
However, a weight value is not informative enough
to tell the relationships between aligned terms. Consider a textual entailment sentence pair for which
we need to determine whether “inside −→ outside”
holds. The matching degree (take cosine similar-
ity as example) of these two words is high: for ex-
ample, ≈ 0.7 in Word2Vec (Mikolov et al., 2013)
and GloVe (Pennington et al., 2014). On the other
hand, the matching score between “inside” and
“in” is lower: 0.31 in Word2Vec, 0.46 in GloVe.
Apparently, the higher number 0.7 does not mean
that “outside” is more likely than “in” to be en-
tailed by “inside.” Instead, joint representations
for aligned phrases [hinside, houtside], [hinside, hin]
are more informative and enable finer-grained rea-
soning than a mechanism that can only transmit
information downstream by matching scores. We
modify the conventional CNN filters so that “in-
side” can make the entailment decision by looking
at the representation of the counterpart term (“out-
side” or “in”) rather than a matching score.
A more damaging property of attentive pooling
is the following. Even if matching scores could
convey the phrase-level entailment degree to some
extent, matching weights, in fact, are not lever-
aged to make the entailment decision directly;
instead, they are used to weight the sum of
the output hidden states of a convolution as the
global sentence representation. In other words,
fine-grained entailment degrees are likely to be
lost in the summation of many vectors. This
illustrates why attentive context vectors partici-
pating in the convolution operation are expected
to be more effective than post-convolution atten-
tive pooling (more explanations in §4.3, paragraph
“Visualization”).
Intra-context attention and extra-context at-
tention. Figures 4(a) and 4(b) depict the model-
ing of a sentence tx with its context ty. This is
a common application of attention mechanism in
the literature; we call it extra-context attention.
But ATTCONV can also be applied to model a
single text input, questo è, intra-context attention.
Consider a sentiment analysis example: “With the
2017 NBA All-Star game in the books I think we
can all agree that this was definitely one to re-
member. Not because of the three-point shootout,
the dunk contest, or the game itself but because of
the ludicrous trade that occurred after the festivi-
ties.” This example contains informative points at
different locations (“remember” and “ludicrous”);
conventional CNNs’ ability to model nonlocal de-
pendency is limited because of fixed-size filter
widths. In ATTCONV, we can set ty = tx. IL
attentive context vector then accumulates all re-
lated parts together for a given position. In other
words, our intra-context attentive convolution is
able to connect all related spans together to form
a comprehensive decision. This is a new way to
broaden the scope of conventional filter widths: UN
filter now covers not only the local window, Ma
also those spans that are related, but are beyond
the scope of the window.
Comparison to Transformer.2 The “focus”
in ATTCONV corresponds to “key” and “value”
in Transformer;
that is, our versions of "key"
and “value” are the same, coming from the con-
text sentence. The “query” in Transformer cor-
responds to the “source” and “beneficiary” of
ATTCONV; namely, our model has two perspec-
tives to utilize the context: one acts as a real
query (i.e., "source") to attend the context, the other (i.e., "beneficiary") takes the attentive context back to improve the learned representation of itself. If we reduce ATTCONV to unigram convolutional filters, it is pretty much a single Transformer layer (if we neglect the positional encoding in Transformer and unify the "query-key-value" and "source-focus-beneficiary" mechanisms).

2Our "source-focus-beneficiary" mechanism was inspired by Adel and Schütze (2017). Vaswani et al. (2017) later published the Transformer model, which has a similar "query-key-value" mechanism.
4 Experiments
We evaluate ATTCONV on sentence modeling in
three scenarios: (i) Zero-context, that is, intra-context; the same input sentence acts as tx as well as ty; (ii) Single-context, that is, textual entailment—hypothesis modeling with a single premise as the extra-context; and (iii) Multiple-context, namely, claim verification—claim modeling with multiple extra-contexts.
4.1 Common Set-up and Common Baselines
All experiments share a common set-up. The input
is represented using 300-dimensional publicly
available Word2Vec (Mikolov et al., 2013) em-
beddings; out of vocabulary embeddings are ran-
domly initialized. The architecture consists of the
following four layers in sequence: embedding,
attentive convolution, max-pooling, and logistic
regression. The context-aware representation of
tx is forwarded to the logistic regression layer.
We use AdaGrad (Duchi et al., 2011) for training.
Embeddings are fine-tuned during training. Hyper-
parameter values include: learning rate 0.01, hidden
size 300, batch size 50, filter width 3.
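For reference, the shared set-up can be collected into a small configuration sketch; the key names are ours, the values are the ones stated above.

```python
# Shared experimental set-up of Section 4.1; the key names are ours.
COMMON_CONFIG = {
    "word_embeddings": "Word2Vec, 300-dim (OOV words randomly initialized)",
    "layers": ["embedding", "attentive_convolution", "max_pooling", "logistic_regression"],
    "optimizer": "AdaGrad",
    "fine_tune_embeddings": True,
    "learning_rate": 0.01,
    "hidden_size": 300,
    "batch_size": 50,
    "filter_width": 3,
}
```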
All experiments are designed to explore com-
parisons in three aspects: (i) within ATTCONV,
“light” vs. “advanced”; (ii) “attentive convolution”
vs. “attentive pooling”/“attention only”; E (iii)
“attentive convolution” vs. “attentive RNN”.
To this end, we always report "light" and
“advanced” ATTCONV performance and compare
against five types of common baselines: (i) w/o
context; (ii) w/o attention; (iii) w/o convolution:
Similar to the Transformer’s principle (Vaswani
et al., 2017), we discard the convolution oper-
ation in Equation (5) and forward the addition of the attentive context c^x_i and the hidden state h^x_i into a fully connected layer. To keep enough parameters, we stack in total four layers so that "w/o convolution" has the same size of parameters as light-ATTCONV; (iv) with attention: RNNs with attention and CNNs with attentive pooling; and (v) prior state of the art, typeset in italics.
systems                        acc
w/o attention
  Paragraph Vector             58.43
  Lin et al. Bi-LSTM           61.99
  Lin et al. CNN               62.05
  MultichannelCNN (Kim)        64.62
with attention
  CNN+internal attention       61.43
  ABCNN                        61.36
  APCNN                        61.98
  Attentive-LSTM               63.11
  Lin et al. RNN Self-Att.     64.21
ATTCONV
  light                        66.75
  w/o convolution              61.34
  advanced                     67.36∗

Table 3: System comparison of sentiment analysis on Yelp. Significant improvements over state of the art are marked with ∗ (test of equal proportions, p < 0.05).
4.2 Sentence Modeling with Zero-context:
Sentiment Analysis
We evaluate sentiment analysis on a Yelp bench-
mark released by Lin et al. (2017): review-star
pairs in sizes 500K (train), 2,000 (dev), and 2,000
(test). Most text instances in this data set are
long: 25%, 50%, 75% percentiles are 46, 81,
and 125 words, respectively. The task is five-way
classification: 1 to 5 stars. The measure is accuracy.
We use this benchmark because the predominance
of long texts lets us evaluate the system perfor-
mance of encoding long-range context, and the
system by Lin et al. is directly related to ATTCONV
in intra-context scenario.
Baselines.
(i) w/o attention. Three baselines
from Lin et al. (2017): Paragraph Vector (Le
and Mikolov, 2014) (unsupervised sentence rep-
resentation learning), BiLSTM, and CNN. We
also reimplement MultichannelCNN (Kim, 2014),
recognized as a simple but surprisingly strong
sentence modeler. (ii) with attention. A vanilla
“Attentive-LSTM” by Rocktäschel et al. (2016).
“RNN Self-Attention” (Lin et al., 2017) is di-
rectly comparable to ATTCONV: it also uses intra-
context attention. “CNN+internal attention” (Adel
and Schütze, 2017), an intra-context attention idea
similar to, but less complicated than, Lin et al.
(2017). ABCNN & APCNN – CNNs with atten-
tive pooling.
Results and Analysis. Table 3 shows that
advanced-ATTCONV surpasses its “light” coun-
terpart, and obtains significant improvement over
the state of the art.
Figure 5: ATTCONV vs. MultichannelCNN for groups of Yelp text with ascending text lengths. ATTCONV performs more robustly across different lengths of text.
In addition, ATTCONV surpasses attentive pool-
ing (ABCNN&APCNN) with a big margin (>5%)
and outperforms the representative attentive-LSTM
(>4%).
Moreover,
it outperforms the two self-
attentive models: CNN+internal attention (Adel
and Schütze, 2017) and RNN Self-Attention (Lin
et al., 2017), which are specifically designed
for single-sentence modeling. Adel and Schütze
(2017) generate an attention weight for each CNN
hidden state by a linear transformation of the same
hidden state, then compute weighted average over
all hidden states as the text representation. Lin
et al. (2017) extend that idea by generating a
group of attention weight vectors, then RNN hid-
den states are averaged by those diverse weighted
vectors, allowing extracting different aspects of
the text into multiple vector representations. Both
works are essentially weighted mean pooling, sim-
ilar to the attentive pooling in Yin et al. (2016) and
dos Santos et al. (2016).
Next, we compare ATTCONV with Multichan-
nelCNN,
the strongest baseline system (“w/o
attention”), for different length ranges to check
whether ATTCONV can really encode long-range
context effectively. We sort the 2,000 test instances
by length, then split them into 10 groups, each
consisting of 200 instances. Figure 5 shows per-
formance of ATTCONV vs. MultichannnelCNN.
We observe that ATTCONV consistently outper-
forms MultichannelCNN for all lengths. Further-
more, the improvement over MultichannelCNN
generally increases with length. This is evidence
that ATTCONV more effectively models long text.
        #instances   #entail   #neutral
train   23,596       8,602     14,994
dev     1,304        657       647
test    2,126        842       1,284
total   27,026       10,101    16,925

Table 4: Statistics of the SCITAIL data set.
systems               acc
w/o attention
  Majority Class      60.4
  w/o Context         65.1
  Bi-LSTM             69.5
  NGram model         70.6
  Bi-CNN              74.4
with attention
  Enhanced LSTM       70.6
  Attentive-LSTM      71.5
  Decomp-Att          72.3
  DGEM                77.3
  APCNN               75.2
  ABCNN               75.8
ATTCONV-light         78.1
w/o convolution       75.1
ATTCONV-advanced      79.2

Table 5: ATTCONV vs. baselines on SCITAIL.
This is likely because of ATTCONV’s capability to
encode broader context in its filters.
4.3 Sentence Modeling with a Single Context:
Textual Entailment
Data Set. SCITAIL (Khot et al., 2018) is a textual
entailment benchmark designed specifically for a
real-world task: multi-choice question answering.
All hypotheses tx were obtained by rephrasing
(question, correct answer) pairs into single sen-
tences, and premises ty are relevant Web sentences
retrieved by an information retrieval method. Then
the task is to determine whether a hypothesis is
true or not, given a premise as context. All (tx, ty) pairs are annotated via crowdsourcing. Accuracy is reported. Table 1 shows examples and Table 4
gives statistics.
By this construction, a substantial performance
improvement on SCITAIL is equivalent to a better
QA performance (Khot et al., 2018). The hypoth-
esis tx is the target sentence, and the premise ty
acts as its context.
Baselines. Apart from the common baselines
(see Section 4.1), we include systems covered
by Khot et al. (2018): (i) n-gram Overlap: An
overlap baseline, considering lexical granularity
such as unigrams, one-skip bigrams, and one-
skip trigrams. (ii) Decomposable Attention Model
(Decomp-Att) (Parikh et al., 2016): Explore atten-
tion mechanisms to decompose the task into subtasks to solve in parallel. (iii) Enhanced LSTM
(Chen et al., 2017b): Enhance LSTM by taking
into account syntax and semantics from parsing
information.
(iv) DGEM (Khot et al., 2018): A
decomposed graph entailment model, the current
state-of-the-art.
Table 5 presents results on SCITAIL. (i) Within
ATTCONV, “advanced” beats “light” by 1.1%;
(ii) "w/o convolution" and attentive pooling (i.e.,
ABCNN & APCNN) get lower performances by
3%–4%; (iii) More complicated attention mech-
anisms equipped into LSTM (e.g., "attentive-
LSTM” and “enhanced-LSTM”) perform even
worse.
Error Analysis.
To better understand the
ATTCONV in SCITAIL, we study some error
cases listed in Table 6.
Language conventions. Pair #1 uses sequen-
tial commas (i.e., in "the egg, larva, pupa, and adult") or a special symbol sequence (i.e., in "egg
−> larva −> pupa −> adult”) to form a set or
sequence; pair #2 has “A (or B)” to express the
equivalence of A and B. This challenge is expected
to be handled by DNNs with specific training signals.
Knowledge beyond the text ty. In #3, "because smaller amounts of water evaporate in the cool morning" cannot be inferred from the premise ty directly. The main challenge in #4 is to distinguish "weight" from "force," which requires background physical knowledge that is beyond the presented text here and beyond the expressivity of word embeddings.
Complex discourse relation. The premise in #5
has an “or” structure. In #6, the inserted phrase
“with about 16,000 species” makes the connection
between “nonvascular plants” and “the mosses,
liverworts, and hornworts” hard to detect. Both
instances require the model to decode the dis-
course relation.
ATTCONV on SNLI. Table 7 shows the comparison. We observe that: (i) classifying hypotheses without looking at premises, that is, "w/o
context” baseline, results in a large improvement
over the “majority baseline.” This verifies the
strong bias in the hypothesis construction of the
SNLI data set (Gururangan et al., 2018; Poliak
et al., 2018). (ii) ATTCONV (advanced) surpasses
#  (Premise ty, Hypothesis tx) Pair                                                        G/P  Challenge
1  (ty) These insects have 4 life stages, the egg, larva, pupa, and adult.
   (tx) The sequence egg −> larva −> pupa −> adult shows the life cycle of some insects.   1/0  language conventions
2  (ty) . . . the notochord forms the backbone (or vertebral column).
   (tx) Backbone is another name for the vertebral column.                                 1/0  language conventions
3  (ty) Water lawns early in the morning . . . prevent evaporation.
   (tx) Watering plants and grass in the early morning is a way to conserve water
        because smaller amounts of water evaporate in the cool morning.                    1/0  beyond text
4  (ty) . . . the SI unit . . . for force is the Newton (N) and is defined as (kg·m/s−2).
   (tx) Newton (N) is the SI unit for weight.                                              0/1  beyond text
5  (ty) Heterotrophs get energy and carbon from living plants or animals (consumers)
        or from dead organic matter (decomposers).
   (tx) Mushrooms get their energy from decomposing dead organisms.                        0/1  discourse relation
6  (ty) . . . are a diverse assemblage of three phyla of nonvascular plants, with
        about 16,000 species, that includes the mosses, liverworts, and hornworts.
   (tx) Moss is best classified as a nonvascular plant.                                    1/0  discourse relation

Table 6: Error cases of ATTCONV in SCITAIL. ". . .": truncated text. "G/P": gold/predicted label.
systems                                   #para   acc
majority class                            –       34.3
w/o context (i.e., hypothesis only)       270K    68.7
w/o attention
  Bi-LSTM (Bowman et al., 2015)           220K    77.6
  Bi-CNN                                  270K    80.3
  Tree-CNN (Mou et al., 2016)             3.5M    82.1
  NES (Munkhdalai and Yu, 2017)           6.3M    84.8
with attention
  Attentive-LSTM (Rocktäschel)            250K    83.5
  Self-Attentive (Lin et al., 2017)       95M     84.4
  Match-LSTM (Wang and Jiang)             1.9M    86.1
  LSTMN (Cheng et al., 2016)              3.4M    86.3
  Decomp-Att (Parikh)                     580K    86.8
  Enhanced LSTM (Chen et al., 2017b)      7.7M    88.6
  ABCNN (Yin et al., 2016)                834K    83.7
  APCNN (dos Santos et al., 2016)         360K    83.9
ATTCONV – light                           360K    86.3
w/o convolution                           360K    84.9
ATTCONV – advanced                        900K    87.8
State-of-the-art (Peters et al., 2018)    8M      88.7
Table 7: Performance comparison on SNLI test. Ensemble systems are not included.
all “w/o attention” baselines and “with attention”
CNN baselines (i.e., attentive pooling), obtaining a performance (87.8%) that is close to the state of the art (88.7%).
We also report the parameter size in SNLI, as most baseline systems did. Table 7 shows that, in comparison to these baselines, our ATTCONV (light and advanced) has a more limited number of parameters, yet its performance is competitive.
Visualization. In Figure 6, we visualize the attention mechanisms explored in attentive convolution (Figure 6(a)) and attentive pooling (Figure 6(b)).
Figure 6(a) explores the visualization of two kinds of features learned by light ATTCONV in the SNLI data set (most are short sentences with rich phrase-level reasoning): (i) e_{i,j} in Equation (1) (after softmax), which shows the attention distribution over context ty by the hidden state h^x_i in sentence tx; (ii) h^x_{i,new} in Equation (5) for i = 1, 2, · · · , |tx|; it shows the context-aware word features in tx. By the two visualized features, we can identify which parts of the context ty are more important for a word in sentence tx, and a max-pooling over those context-driven word representations selects and forwards dominant (word, leftcontext, rightcontext, attcontext) combinations to the final decision maker.
Figure 6(a) shows the features3 of sentence tx = "A dog jumping for a Frisbee in the snow" conditioned on the context ty = "An animal is outside in the cold weather, playing with a plastic toy." Observations: (i) The right figure shows that the attention mechanism successfully aligns some cross-sentence phrases that are informative to the textual entailment problem, such as "dog" to "animal" (i.e., c^x_dog ≈ "animal") and "Frisbee" to "plastic toy" and "playing" (i.e., c^x_Frisbee ≈ "plastic toy" + "playing"); (ii) The left figure shows that a max-pooling over the generated features of filter_1 and filter_2 will focus on the context-aware phrases (a, dog, jumping, c^x_dog) and (a,
3For simplicity, we show 2 out of 300 ATTCONV filters.
(a) Visualization of features generated by ATTCONV's filters on sentences tx and ty. A max-pooling over filter_1 locates the phrase (a, dog, jumping, c^x_dog), and filter_2 locates the phrase (a, Frisbee, in, c^x_Frisbee). "c^x_dog" (resp. "c^x_Fris.")—the attentive context of "dog" (resp. "Frisbee") in tx—mainly comes from "animal" (resp. "toy" and "playing") in ty.
(b) Attention visualization for attentive pooling (ABCNN). Based on the words in tx and ty, first, a convolution layer with
filter width 3 outputs hidden states for each sentence, then each hidden state will obtain an attention weight for how well this
hidden state matches towards all the hidden states in the other sentence, and finally all hidden states in each sentence will be
weighted and summed up as the sentence representation. This visualization shows that the spans “dog jumping for” and “in
the snow” in tx and the spans “animal is outside” and “in the cold” in ty are most indicative to the entailment reasoning.
Figure 6: Attention visualization for attentive convolution (top) and attentive pooling (bottom) between sentence
tx = “A dog jumping for a Frisbee in the snow” (left) and sentence ty = “An animal is outside in the cold weather,
playing with a plastic toy” (right).
Frisbee, In, cx
F risbee) rispettivamente; the two phrases
are crucial to the entailment reasoning for this (ty,
tx) pair.
Figure 6(b) shows the phrase-level (i.e., each
consecutive trigram) attentions after the convolu-
tion operation. As Figure 3 shows, a subsequent
pooling step will weight and sum up those phrase-
level hidden states as an overall sentence represen-
tazione. So, even though some phrases such as “in
the snow” in tx and “in the cold” in ty show im-
portance in this pair instance, the final sentence
representation still (i) lacks a fine-grained phrase-
to-phrase reasoning, E (ii) underestimates some
indicative phrases such as “A dog” in tx and “An
animal” in ty.
Briefly, attentive convolution first performs phrase-to-phrase, inter-sentence reasoning, then composes features; attentive pooling composes phrase features as sentence representations, then performs reasoning. Intuitively, attentive convolution better fits the way humans conduct entailment reasoning, and our experiments validate its superiority—it is the hidden states of the aligned phrases rather than their matching scores that support better representation learning and decision-making.

        #SUPPORTED   #REFUTED   #NEI
train   80,035       29,775     35,639
dev     3,333        3,333      3,333
test    3,333        3,333      3,333

Table 8: Statistics of claims in the FEVER data set.
The comparisons in both SCITAIL and SNLI
show that:
• CNNs with attentive convolution (i.e., ATTCONV) outperform the CNNs with attentive pooling (i.e., ABCNN and APCNN);
• Some competitors got over-tuned on SNLI
while demonstrating mediocre performance
in SCITAIL—a real-world NLP task. Our sys-
tem ATTCONV shows its robustness in both
benchmark data sets.
4.4 Sentence Modeling with Multiple Contexts:
Claim Verification
Data Set. For this task, we use FEVER (Thorne
et al., 2018); it infers the truthfulness of claims by
extracted evidence. The claims in FEVER were
manually constructed from the introductory sec-
tions of about 50K popular Wikipedia articles in
the June 2017 dump. Claims have 9.4 tokens on
average. Table 8 lists the claim statistics.
In addition to claims, FEVER also provides a
Wikipedia corpus of approximately 5.4 million ar-
ticles, from which gold evidences are gathered and
provided. Figure 7 shows the distributions of sen-
tence sizes in FEVER’s ground truth evidence set
(i.e., the context size in our experimental set-up).
We can see that roughly 28% of evidence instances
cover more than one sentence and roughly 16%
cover more than two sentences.
Each claim is labeled as SUPPORTED, RE-
FUTED, or NOTENOUGHINFO (NEI) given the
gold evidence. The standard FEVER task also
explores the performance of evidence extraction,
evaluated by F1 between extracted evidence and
gold evidence. This work focuses on the claim en-
tailment part, assuming the evidences are provided
(extracted or gold). More specifically, we treat a
claim as tx, and its evidence sentences as context ty.
Figure 7: Distribution of #sentence in FEVER evidence.
This task has two evaluations:
(i) ALL—
accuracy of claim verification regardless of the
validness of evidence; (ii) SUBSET—verification
accuracy of a subset of claims, in which the gold
evidence for SUPPORTED and REFUTED claims
must be fully retrieved. We use the official eval-
uation toolkit.4
Set-ups.
(i) We adopt the same retrieved evidence set (i.e., contexts ty) as Thorne et al. (2018):
top-5 most relevant sentences from top-5 retrieved
wiki pages by a document retriever (Chen et al.,
2017UN). The quality of this evidence set against the
ground truth is: 44.22 (recall), 10.44 (precision),
16.89 (F1) on dev, and 45.89 (recall), 10.79 (pre-
cision), 17.47 (F1) on test. This set-up challenges
our system with potentially unrelated or even mis-
leading context. (ii) We use the ground truth evi-
dence as context. This lets us determine how far
our ATTCONV can go for this claim verification
problem once the accurate evidence is given.
Baselines. We first include the two systems ex-
plored by Thorne et al. (2018): (i) MLP: A multi-
layer perceptron baseline with a single hidden
layer, based on tf-idf cosine similarity between the
claim and the evidence (Riedel et al., 2017); (ii)
Decomp-Att (Parikh et al., 2016): A decompos-
able attention model that is tested in SCITAIL and
SNLI before. Note that both baselines first relied
on an information retrieval system to extract the
top-5 relevant sentences from the retrieved top-5
wiki pages as evidence for claims, then concate-
nated all evidence sentences as a longer context
for a claim.
4https://github.com/sheffieldnlp/fever-
scorer.
                                           retrie. evi.      gold evi.
      system                               ALL     SUB
dev   MLP                                  41.86   19.04      65.13
      Bi-CNN                               47.82   26.99      75.02
      APCNN                                50.75   30.24      78.91
      ABCNN                                51.39   32.44      77.13
      Attentive-LSTM                       52.47   33.19      78.44
      Decomp-Att                           52.09   32.57      80.82
      ATTCONV light, context-wise          57.78   34.29      83.20
        w/o conv.                          47.29   25.94      73.18
      ATTCONV light, context-conc          59.31   37.75      84.74
        w/o conv.                          48.02   26.67      73.44
      ATTCONV advan., context-wise         60.20   37.94      84.99
      ATTCONV advan., context-conc         62.26   39.44      86.02
test  (Thorne et al., 2018)                50.91   31.87      –
      ATTCONV                              61.03   38.77      84.61

Table 9: Performance on dev and test of FEVER. In the "gold evi." scenario, ALL and SUBSET are the same.
We
then consider two variants of our ATTCONV in dealing with the modeling of tx with variable-size context ty. (i) Context-wise: we first use all evidence sentences one by one as context ty to guide the representation learning of the claim tx, generating a group of context-aware representation vectors for the claim; then we do element-wise max-pooling over this vector group as the final representation of the claim. (ii) Context-conc: concatenate all evidence sentences as a single piece of context, then model the claim based on this context. This is the same preprocessing step as Thorne et al. (2018) did.
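A minimal sketch of the two variants, assuming a function encode(claim, context) that runs the ATTCONV pipeline of Section 4.1 and returns a fixed-size claim representation; the function and variable names are ours.

```python
import numpy as np

def verify_context_wise(encode, claim, evidences):
    """Context-wise: encode the claim against each evidence sentence separately,
    then element-wise max-pool the resulting claim representations."""
    reps = [encode(claim, evidence) for evidence in evidences]
    return np.max(np.stack(reps, axis=0), axis=0)

def verify_context_conc(encode, claim, evidences):
    """Context-conc: concatenate all evidence sentences into one context and
    encode the claim once against it."""
    return encode(claim, " ".join(evidences))
```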
Results. Table 9 compares our ATTCONV in different set-ups against the baselines. First, ATTCONV surpasses the top competitor "Decomp-Att," reported in Thorne et al. (2018), with big margins in dev (ALL: 62.26 vs. 52.09) and test (ALL: 61.03 vs. 50.91). Moreover, "advanced-
ATTCONV” consistently outperforms its “light”
counterpart. Furthermore, ATTCONV surpasses attentive pooling (i.e., ABCNN & APCNN) and
“attentive-LSTM” by >10% in ALL, >6% in SUB
and >8% in “gold evi.”
Figure 8 further explores the fine-grained per-
formance of ATTCONV for different sizes of gold
evidence (i.e., different sizes of context ty). The
system shows comparable performances for sizes
1 E 2. Even for context sizes larger than 5, Esso
only drops by 5%.
Figure 8: Fine-grained ATTCONV performance given variable-size golden FEVER evidence as claim's context.
These experiments on claim verification clearly
show the effectiveness of ATTCONV in sen-
tence modeling with variable-size context. Questo
should be attributed to the attention mechanism in
ATTCONV, which enables a word or a phrase in
the claim tx to “see” and accumulate all related
clues even if those clues are scattered across mul-
tiple contexts ty.
Error Analysis. We do error analysis for the "retrieved evidence" scenario.
Error case #1 is due to the failure of fully re-
trieving all evidence. Per esempio, a successful
support of the claim “Weekly Idol has a host born
in the year 1978” requires the information compo-
sition from three evidence sentences, two from the
wiki article “Weekly Idol,” and one from “Jeong
Hyeong-don.” However, only one of them is
retrieved in the top-5 candidates. Our system pre-
dicts REFUTED. This error is more common in
instances for which no evidence is retrieved.
Error case #2 is due to the insufficiency of rep-
resentation learning. Consider the wrong claim "Corsica belongs to Italy" (i.e., in the REFUTED class). Even though good evidence is retrieved, the
system is misled by noise evidence: “It is located
. . . west of the Italian Peninsula, with the nearest
land mass being the Italian island . . . ".
Error case #3 is due to the lack of advanced data
preprocessing. For a human, it is very easy to “re-
fute” the claim “Telemundo is an English-language
television network” by the evidence “Telemundo
is an American Spanish-language terrestrial tele-
vision . . . " (from the “Telemundo” wikipage), by
checking the keyphrases: “Spanish-language” vs.
“English-language.” Unfortunately, both tokens
are unknown words in our system; as a result,
they do not have informative embeddings. A more
careful data preprocessing is expected to help.
5 Summary
We presented ATTCONV, the first work that en-
ables CNNs to acquire the attention mechanism
commonly used in RNNs. ATTCONV combines
the strengths of CNNs with the strengths of the
RNN attention mechanism. On the one hand,
it makes broad and rich context available for
prediction, either context from external
inputs
(extra-context) or internal inputs (intra-context).
D'altra parte, it can take full advantage of
the strengths of convolution:
It is more order-
sensitive than attention in RNNs and local-context
information can be powerfully and efficiently
modeled through convolution filters. Our experi-
ments demonstrate the effectiveness and flexibil-
ity of ATTCONV when modeling sentences with
variable-size context.
Acknowledgments
We gratefully acknowledge funding for this work by the European Research Council (ERC
#740516). We would like to thank the anonymous
reviewers for their helpful comments.
References
Heike Adel and Hinrich Schütze. 2017. Exploring
different dimensions of attention for uncertainty
detection. In Proceedings of EACL, pages 22–34,
Valencia, Spain.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
Bengio. 2015. Neural machine translation by
jointly learning to align and translate. In Pro-
ceedings of ICLR, San Diego, USA.
Samuel R. Bowman, Gabor Angeli, Christopher
Potts, and Christopher D. Manning. 2015. A
large annotated corpus for learning natural lan-
guage inference. In Proceedings of EMNLP,
pages 632–642, Lisbon, Portugal.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading Wikipedia to answer open-domain questions. In Proceedings
of ACL, pages 1870–1879, Vancouver, Canada.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei,
Hui Jiang, and Diana Inkpen. 2017b. Enhanced LSTM for natural language inference. In Pro-
ceedings of ACL, pages 1657–1668, Vancouver,
Canada.
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of EMNLP, pages 551–561, Austin, USA.
Ronan Collobert, Jason Weston, Léon Bottou,
Michael Karlen, Koray Kavukcuoglu, and Pavel
P. Kuksa. 2011. Natural language processing
(almost) from scratch. Journal of Machine
Learning Research, 12:2493–2537.
Ido Dagan, Dan Roth, Mark Sammons, and
Fabio Massimo Zanzotto. 2013. Recognizing
Textual Entailment: Models and Applications.
Synthesis Lectures on Human Language Tech-
nologies. Morgan & Claypool.
John Duchi, Elad Hazan, and Yoram Singer. 2011.
Adaptive subgradient methods for online learn-
ing and stochastic optimization. Journal of Ma-
chine Learning Research, 12:2121–2159.
Jeffrey L. Elman. 1990. Finding structure in time.
Cognitive Science, 14(2):179–211.
Jonas Gehring, Michael Auli, David Grangier,
Denis Yarats, and Yann N. Dauphin. 2017.
Convolutional sequence to sequence learning.
In Proceedings of ICML, pages 1243–1252,
Sydney, Australia.
Alex Graves. 2013. Generating sequences with re-
current neural networks. CoRR, abs/1308.0850.
Alex Graves, Greg Wayne, and Ivo Danihelka.
2014. Neural Turing machines. CoRR, abs/1410.5401.
Suchin Gururangan, Swabha Swayamdipta, Omer
Levy, Roy Schwartz, Samuel R. Bowman, and
Noah A. Smith. 2018. Annotation artifacts in
natural language inference data. In Proceedings
of NAACL-HLT, pages 107–112, New Orleans,
USA.
Karl Moritz Hermann, Tomás Kociský, Edoardo
Grefenstette, Lasse Espeholt, Will Kay, Mustafa
Suleyman, and Phil Blunsom. 2015. Teach-
ing machines to read and comprehend. In Pro-
ceedings of NIPS, pages 1693–1701, Montreal,
Canada.
Nal Kalchbrenner, Edward Grefenstette, and Phil
Blunsom. 2014. A convolutional neural net-
work for modelling sentences. In Proceedings
of ACL, pages 655–665, Baltimore, USA.
Tushar Khot, Ashish Sabharwal, and Peter Clark.
2018. SciTaiL: A textual entailment dataset
from science question answering. In Proceed-
ings of AAAI, pages 5189–5197, New Orleans,
USA.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746–1751, Doha, Qatar.
Yoon Kim, Carl Denton, Luong Hoang, and
Alexander M. Rush. 2017. Structured atten-
tion networks. In Proceedings of ICLR, Toulon,
France.
Ankit Kumar, Ozan Irsoy, Peter Ondruska,
Mohit Iyyer, James Bradbury, Ishaan Gulrajani,
Victor Zhong, Romain Paulus, and Richard
Socher. 2016. Ask me anything: Dynamic
memory networks for natural language process-
ing. In Proceedings of ICML, pages 1378–1387,
New York City, USA.
Quoc Le and Tomas Mikolov. 2014. Distributed
representations of sentences and documents.
In Proceedings of ICML, pages 1188–1196,
Beijing, China.
Yann LeCun, Léon Bottou, Yoshua Bengio, and
Patrick Haffner. 1998. Gradient-based learning
applied to document recognition. Proceedings
of the IEEE, 86(11):2278–2324.
Jiwei Li, Minh-Thang Luong, and Dan Jurafsky.
2015. A hierarchical neural autoencoder for
paragraphs and documents. In Proceedings of
ACL, pages 1106–1115, Beijing, China.
Jindrich Libovický and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of ACL, pages 196–202, Vancouver, Canada.
Zhouhan Lin, Minwei Feng, Cícero Nogueira dos
Santos, Mo Yu, Bing Xiang, Bowen Zhou,
and Yoshua Bengio. 2017. A structured self-
attentive sentence embedding. In Proceedings
of ICLR, Toulon, France.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP, pages 1412–1421, Lisbon, Portugal.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016.
Neural variational inference for text processing.
In Proceedings of ICML, pages 1727–1736,
New York City, USA.
Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory
S. Corrado, and Jeffrey Dean. 2013. Dis-
tributed representations of words and phrases
and their compositionality. In Proceedings of
NIPS, pages 3111–3119, Lake Tahoe, USA.
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui
Yan, and Zhi Jin. 2016. Natural language in-
ference by tree-based convolution and heuristic
matching. In Proceedings of ACL, pages 130–136,
Berlin, Germany.
Tsendsuren Munkhdalai and Hong Yu. 2017. Neural semantic encoders. In Proceedings of EACL, pages 397–407, Valencia, Spain.
Ramesh Nallapati, Bowen Zhou, Cícero Nogueira
dos Santos, Çaglar Gülçehre, and Bing Xiang.
2016. Abstractive text summarization using
sequence-to-sequence RNNs and beyond. In Pro-
ceedings of CoNLL, pages 280–290, Berlin,
Germany.
Ankur P. Parikh, Oscar Täckström, Dipanjan Das,
and Jakob Uszkoreit. 2016. A decomposable
attention model for natural language inference.
In Proceedings of EMNLP, pages 2249–2255,
Austin, USA.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543, Doha, Qatar.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237, New Orleans, USA.
Adam Poliak, Jason Naradowsky, Aparajita Haldar,
Rachel Rudinger, and Benjamin Van Durme.
2018. Hypothesis only baselines in natural
language inference. In Proceedings of *SEM,
pages 180–191, New Orleans, USA.
Benjamin Riedel, Isabelle Augenstein, Georgios P.
Spithourakis, and Sebastian Riedel. 2017. A
simple but tough-to-beat baseline for the fake
news challenge stance detection task. CoRR,
abs/1707.03264.
Tim Rocktäschel, Edward Grefenstette, Karl
Moritz Hermann, Tomáš Kočiský, and Phil
Blunsom. 2016. Reasoning about entailment
with neural attention. In Proceedings of ICLR,
San Juan, Puerto Rico.
Cícero Nogueira dos Santos, Ming Tan, Bing
Xiang, and Bowen Zhou. 2016. Attentive pool-
ing networks. CoRR, abs/1602.03609.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi,
and Hannaneh Hajishirzi. 2017. Bidirectional
attention flow for machine comprehension. In
Proceedings of ICLR, Toulon, France.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL, pages 1577–1586, Beijing, China.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Proceedings of NIPS, pages 2377–2385, Montreal, Canada.
James Thorne, Andreas Vlachos, Christos
Christodoulopoulos, and Arpit Mittal. 2018.
FEVER: A large-scale dataset for fact extraction
and verification. In Proceedings of NAACL-
HLT, pages 809–819, New Orleans, USA.
Ashish Vaswani, Noam Shazeer, Niki Parmar,
Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS,
pages 6000–6010, Long Beach, USA.
Shuohang Wang and Jing Jiang. 2016. Learn-
ing natural language inference with LSTM. In
Proceedings of NAACL-HLT, pages 1442–1451,
San Diego, USA.
Shuohang Wang and Jing Jiang. 2017. Machine
comprehension using match-LSTM and an-
swer pointer. In Proceedings of ICLR, Toulon,
France.
Wenhui Wang, Nan Yang, Furu Wei, Baobao
Chang, and Ming Zhou. 2017a. Gated self-
matching networks for reading comprehension
and question answering. In Proceedings of ACL,
pages 189–198, Vancouver, Canada.
Zhiguo Wang, Wael Hamza, and Radu Florian.
2017b. Bilateral multi-perspective matching for
natural language sentences. In Proceedings of
IJCAI, pages 4144–4150, Melbourne, Australia.
Caiming Xiong, Stephen Merity, and Richard
Socher. 2016. Dynamic memory networks for
visual and textual question answering. In Proceedings of ICML, pages 2397–2406, New York City, USA.
Caiming Xiong, Victor Zhong, and Richard
Socher. 2017. Dynamic coattention networks for
question answering. In Proceedings of ICLR,
Toulon, France.
Wenpeng Yin, Hinrich Schütze, Bing Xiang, and
Bowen Zhou. 2016. ABCNN: Attention-based
convolutional neural network for modeling sen-
tence pairs. TACL, 4:259–272.