Attentive Convolution:
Equipping CNNs with RNN-style Attention Mechanisms
Wenpeng Yin
Department of Computer and Information
Science, University of Pennsylvania
wenpeng@seas.upenn.edu
Hinrich Schütze
Center for Information and Language
Processing, LMU Munich, Germany
inquiries@cislmu.org
Abstract
In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text tx. In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context, but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text tx that are distant or (ii) from extra (i.e., external) contexts ty. Experiments on sentence modeling with zero-context (sentiment analysis), single-context (textual entailment) and multiple-context (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs.1
1 Introduction
Natural language processing (NLP) has benefited
greatly from the resurgence of deep neural net-
works (DNNs), thanks to their high performance
with less need of engineered features. A DNN typ-
ically is composed of a stack of non-linear trans-
1https://github.com/yinwenpeng/Attentive_
Convolution.
formation layers, each generating a hidden rep-
resentation for the input by projecting the output
of a preceding layer into a new space. To date,
building a single and static representation to ex-
press an input across diverse problems is far from
satisfactory. Instead, it is preferable that the rep-
resentation of the input vary in different applica-
tion scenarios. In response, attention mechanisms
(Graves, 2013; Graves et al., 2014) have been pro-
posed to dynamically focus on parts of the in-
put that are expected to be more specific to the
Problem. They are mostly implemented based on
fine-grained alignments between two pieces of ob-
jects, each emitting a dynamic soft-selection to the
components of the other, so that the selected ele-
ments dominate in the output hidden representa-
tion. Attention-based DNNs have demonstrated good
performance on many tasks.
Convolutional neural networks (CNNs; LeCun
et al., 1998) and recurrent neural networks (RNNs;
Elman, 1990) are two important types of DNNs.
Most work on attention has been done for RNNs.
Attention-based RNNs typically take three types
of inputs to make a decision at the current step:
(i) the current input state, (ii) a representation of local context (computed unidirectionally or bidirectionally; Rocktäschel et al. [2016]), and (iii) the attention-weighted sum of hidden states corresponding to nonlocal context (e.g., the hidden states of the encoder in neural machine translation; Bahdanau et al. [2015]). An important question, therefore, is whether CNNs can benefit from such an attention mechanism as well, and how. This is our technical motivation.
Our second motivation is natural language un-
derstanding. In generic sentence modeling without
extra context (Collobert et al., 2011; Kalchbrenner
et al., 2014; Kim, 2014), CNNs learn sentence rep-
resentations by composing word representations
that are conditioned on a local context window.
We believe that attentive convolution is needed
Transactions of the Association for Computational Linguistics, vol. 6, pp. 687–702, 2018. Action Editor: Slav Petrov.
Submission batch: 6/2018; Revision batch: 10/2018; Published 12/2018.
© 2018 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
premise, modeled as context ty                          label
Plant cells have structures that animal cells lack.      0
Animal cells do not have cell walls.                      1
The cell wall is not a freestanding structure.            0
Plant cells possess a cell wall, animals never.           1

Table 1: Examples of four premises for the hypothesis tx = "A cell wall is not present in animal cells." in the SCITAIL data set. Right column (hypothesis's label): "1" means true, "0" otherwise.
for some natural language understanding tasks that are essentially sentence modeling within contexts. Examples: textual entailment (is a hypothesis true given a premise as the single context?; Dagan et al. [2013]) and claim verification (is a claim correct given extracted evidence snippets from a text corpus as the context?; Thorne et al. [2018]). Consider the SCITAIL (Khot et al., 2018) textual entailment examples in Table 1; here, the input text tx is the hypothesis and each premise is a context text ty. And consider the illustration of claim verification in Figure 1; here, the input text tx is the claim and ty can consist of multiple pieces of context. In both cases, we would like the representation of tx to be context-specific.
In this work, we propose attentive convolution networks, ATTCONV, to model a sentence (i.e., tx) either in intra-context (where ty = tx) or extra-context (where ty ≠ tx and ty can have many pieces) scenarios. In the intra-context case (sentiment analysis, for example), ATTCONV extends the local context window of standard CNNs to cover the entire input text tx. In the extra-context case, ATTCONV extends the local context window to cover accompanying contexts ty.
For a convolution operation over a window in tx such as (leftcontext, word, rightcontext), we first compare the representation of word with all hidden states in the context ty to obtain an attentive context representation attcontext; then convolution filters derive a higher-level representation for word, denoted as wordnew, by integrating word with three pieces of context: leftcontext, rightcontext, and attcontext. We interpret this attentive convolution from two perspectives. (i) For intra-context, a higher-level word representation wordnew is learned by considering the local (i.e., leftcontext and rightcontext) as well as nonlocal (i.e., attcontext) context. (ii) For extra-context, wordnew is generated to represent word, together with its cross-text alignment attcontext, in the context leftcontext and rightcontext. In other words, the decision for the word is made based on the connected hidden states of cross-text aligned terms, with local context.

Figure 1: Verify claims in contexts.
We apply ATTCONV to three sentence modeling tasks with variable-size context: a large-scale Yelp sentiment classification task (Lin et al., 2017) (intra-context, i.e., no additional context), SCITAIL textual entailment (Khot et al., 2018) (single extra-context), and claim verification (Thorne et al., 2018) (multiple extra-contexts). ATTCONV outperforms competitive DNNs with and without attention and achieves state-of-the-art on the three tasks.
Overall, we make the following contributions:

• This is the first work that equips convolution filters with the attention mechanism commonly used in RNNs.

• We distinguish and build flexible modules—attention source, attention focus, and attention beneficiary—to greatly advance the expressivity of attention mechanisms in CNNs.

• ATTCONV provides a new way to broaden the originally constrained scope of filters in conventional CNNs. Broader and richer context comes from either external context (i.e., ty) or the sentence itself (i.e., tx).

• ATTCONV shows its flexibility and effectiveness in sentence modeling with variable-size context.
2 Related Work
In this section we discuss attention-related DNNs
in NLP, the most relevant work for our paper.
2.1 RNNs with Attention
Graves (2013) and Graves et al. (2014) first introduced a differentiable attention mechanism that allows RNNs to focus on different parts of the input. This idea has been broadly explored in RNNs, shown in Figure 2, to deal with text generation, such as neural machine translation
(Bahdanau et al., 2015; Luong et al., 2015; Kim et al., 2017; Libovický and Helcl, 2017), response generation in social media (Shang et al., 2015), document reconstruction (Li et al., 2015), and document summarization (Nallapati et al., 2016); machine comprehension (Hermann et al., 2015; Kumar et al., 2016; Xiong et al., 2016; Seo et al., 2017; Wang and Jiang, 2017; Xiong et al., 2017; Wang et al., 2017a); and sentence relation classification, such as textual entailment (Cheng et al., 2016; Rocktäschel et al., 2016; Wang and Jiang, 2016; Wang et al., 2017b; Chen et al., 2017b) and answer sentence selection (Miao et al., 2016).

Figure 2: A simplified illustration of the attention mechanism in RNNs.
We try to explore the RNN-style attention mech-
anisms in CNNs—more specifically, in convolution.
2.2 CNNs with Attention
In NLP, there is little work on attention-based
CNNs. Gehring et al. (2017) propose an attention-
based convolutional seq-to-seq model for machine
translation. Both the encoder and decoder are hi-
erarchical convolution layers. At the nth layer of
the decoder, the output hidden state of a convolu-
tion queries each of the encoder-side hidden states,
then a weighted sum of all encoder hidden states
is added to the decoder hidden state, and finally
this updated hidden state is fed to the convolution
at layer n + 1. Their attention implementation re-
lies on the existence of a multi-layer convolution
structure—otherwise the weighted context from
the encoder side could not play a role in the de-
coder. So essentially their attention is achieved af-
ter convolution. In contrast, we aim to modify the
vanilla convolution, so that CNNs with attentive
convolution can be applied more broadly.
We discuss two systems that are representative
of CNNs that implement the attention in pooling
(i.e., the convolution is still not affected): Yin et al. (2016) and dos Santos et al. (2016), illustrated in Figure 3. Concretely, these two systems work on two input sentences, each with a set of hidden states generated by a convolution layer; then, each sentence will learn a weight for every hidden state by comparing this hidden state with all hidden states in the other sentence; finally, each input sentence obtains a representation by a weighted mean pooling over all its hidden states. The core component—weighted mean pooling—was referred to as "attentive pooling," aiming to yield the sentence representation.

Figure 3: Attentive pooling, summarized from ABCNN (Yin et al., 2016) and APCNN (dos Santos et al., 2016).
In contrast to attentive convolution, attentive
pooling does not connect directly the hidden states
of cross-text aligned phrases in a fine-grained
manner to the final decision making; only the
matching scores contribute to the final weighting
in mean pooling. This important distinction be-
tween attentive convolution and attentive pooling
is further discussed in Section 3.3.
Inspired by the attention mechanisms in RNNs, we assume that it is the hidden states of aligned phrases rather than their matching scores that can better contribute to representation learning and decision making. Thus, our attentive convolution differs from attentive pooling in that it uses attended hidden states from extra context (i.e., ty) or broader-range context within tx to participate in the convolution. In experiments, we will show its superiority.
3 ATTCONV Model
We use bold uppercase (e.g., H) for matrices; bold lowercase (e.g., h) for vectors; bold lowercase with index (e.g., hi) for columns of H; and non-bold lowercase for scalars.
(A) Light attentive convolution layer
(B) Advanced attentive convolution layer
Figure 4: ATTCONV models sentence tx with context ty.
To start, we assume that a piece of text t (t ∈ {tx, ty}) is represented as a sequence of hidden states h_i ∈ R^d (i = 1, 2, . . . , |t|), forming feature map H ∈ R^{d×|t|}, where d is the dimensionality of hidden states. Each hidden state h_i has its left context l_i and right context r_i. In concrete CNN systems, contexts l_i and r_i can cover multiple adjacent hidden states; we set l_i = h_{i−1} and r_i = h_{i+1} for simplicity in the following description.

We now describe light and advanced versions of ATTCONV. Recall that ATTCONV aims to compute a representation for tx in a way that convolution filters encode not only local context, but also attentive context over ty.
3.1 Light ATTCONV
Figure 4(A) shows the light version of ATTCONV. It differs in two key points—(i) and (ii)—both from the basic convolution layer that models a single piece of text and from the Siamese CNN that models two text pieces in parallel. (i) A matching function determines how relevant each hidden state in the context ty is to the current hidden state h^x_i in sentence tx. We then compute an average of the hidden states in the context ty, weighted by the matching scores, to get the attentive context c^x_i for h^x_i. (ii) The convolution for position i in tx integrates hidden state h^x_i with three sources of context: left context h^x_{i−1}, right context h^x_{i+1}, and attentive context c^x_i.
Attentive Context. First, a function generates a matching score e_{i,j} between a hidden state in tx and a hidden state in ty by (i) dot product:

e_{i,j} = (h^x_i)^T · h^y_j                                (1)

or (ii) bilinear form:

e_{i,j} = (h^x_i)^T We h^y_j                               (2)

(where We ∈ R^{d×d}), or (iii) additive projection:

e_{i,j} = (ve)^T · tanh(We · h^x_i + Ue · h^y_j)           (3)

where We, Ue ∈ R^{d×d} and ve ∈ R^d.

Given the matching scores, the attentive context c^x_i for hidden state h^x_i is the weighted average of all hidden states in ty:

c^x_i = Σ_j softmax(e_i)_j · h^y_j                         (4)

We refer to the concatenation of attentive contexts [c^x_1; . . . ; c^x_i; . . . ; c^x_{|tx|}] as the feature map Cx ∈ R^{d×|tx|} for tx.
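As a concrete (and unofficial) illustration, the following NumPy sketch computes the three matching functions and the attentive-context feature map of Equations (1)-(4); all function names and shapes are our own choices, not from the released implementation.

import numpy as np

def score_dot(hx, hy):                    # Eq. (1): dot product
    return hx @ hy

def score_bilinear(hx, hy, We):           # Eq. (2): bilinear form
    return hx @ We @ hy

def score_additive(hx, hy, We, Ue, ve):   # Eq. (3): additive projection
    return ve @ np.tanh(We @ hx + Ue @ hy)

def attentive_context(Hx, Hy):
    # Eq. (4): for every position i of tx, softmax the matching scores over ty
    # and take the weighted average of ty's hidden states.
    E = Hx.T @ Hy                                   # dot-product scores e_{i,j}, shape |tx| x |ty|
    A = np.exp(E - E.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)               # row-wise softmax over ty
    return Hy @ A.T                                 # Cx: one attentive context per tx position

d, lx, ly = 4, 5, 7
Hx, Hy = np.random.randn(d, lx), np.random.randn(d, ly)
print(attentive_context(Hx, Hy).shape)              # (4, 5): feature map Cx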
Attentive Convolution. After attentive context has been computed, a position i in the sentence tx has a hidden state h^x_i, the left context h^x_{i−1}, the right context h^x_{i+1}, and the attentive context c^x_i. Attentive convolution then generates the higher-level hidden state at position i:

h^x_{i,new} = tanh(W · [h^x_{i−1}, h^x_i, h^x_{i+1}, c^x_i] + b)              (5)
            = tanh(W1 · [h^x_{i−1}, h^x_i, h^x_{i+1}] + W2 · c^x_i + b)       (6)

where W ∈ R^{d×4d} is the concatenation of W1 ∈ R^{d×3d} and W2 ∈ R^{d×d}, and b ∈ R^d.
As Equation (6) shows, Equation (5) can be achieved by summing up the results of two separate and parallel convolution steps before the non-linearity. The first is still a standard convolution-without-attention over feature map Hx with filter width 3 over the window (h^x_{i−1}, h^x_i, h^x_{i+1}). The second is a convolution on the feature map Cx (i.e., the attentive context) with filter width 1 (i.e., over each c^x_i); then we sum up the results element-wise, add a bias term, and apply the non-linearity. This divide-then-compose strategy makes attentive convolution easy to implement in practice, with no need to create a new feature map, as required in Equation (5), to integrate Hx and Cx.

role         text
premise      Three firefighters come out of subway station
hypothesis   Three firefighters putting out a fire inside of a subway station

Table 2: Multi-granular alignments required in textual entailment.
It is worth mentioning that W1 ∈ Rd×3d cor-
responds to the filter parameters of a vanilla CNN
and the only added parameter here is W2 ∈ Rd×d,
which only depends on the hidden size.
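To make the divide-then-compose strategy concrete, here is a minimal NumPy sketch of Equation (6) (our own rendering, not the authors' code): a width-3 convolution over Hx and a width-1 convolution over Cx are computed separately and summed before the tanh.

import numpy as np

def light_attconv(Hx, Cx, W1, W2, b):
    # Eq. (6): h^x_{i,new} = tanh(W1 [h^x_{i-1}; h^x_i; h^x_{i+1}] + W2 c^x_i + b)
    # Hx, Cx: d x n feature maps; W1: d x 3d; W2: d x d; b: d.
    d, n = Hx.shape
    Hpad = np.pad(Hx, ((0, 0), (1, 1)))          # zero-pad the sentence boundaries
    out = np.empty((d, n))
    for i in range(n):
        window = np.concatenate([Hpad[:, i], Hpad[:, i + 1], Hpad[:, i + 2]])
        out[:, i] = np.tanh(W1 @ window + W2 @ Cx[:, i] + b)
    return out

d, n = 4, 5
Hx, Cx = np.random.randn(d, n), np.random.randn(d, n)
W1, W2, b = np.random.randn(d, 3 * d), np.random.randn(d, d), np.zeros(d)
print(light_attconv(Hx, Cx, W1, W2, b).shape)    # (4, 5)

The W1 term is exactly a vanilla width-3 convolution; only the d×d matrix W2 is added, which matches the parameter count noted above.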
This light ATTCONV shows the basic princi-
ples of using RNN-style attention mechanisms in
convolution. Our experiments show that this light
version of ATTCONV—even though it incurs a
limited increase of parameters (i.e., W2)—works
much better than the vanilla Siamese CNN and
some of the pioneering attentive RNNs. The fol-
lowing two considerations show that there is space
to improve its expressivity.
(ich) Higher-level or more abstract representa-
tions are required in subsequent layers. Wir finden
that directly forwarding the hidden states in tx or
ty to the matching process does not work well in
some tasks. Pre-learning some more high-level or
abstract representations helps in subsequent learn-
ing phases.
(ii) Multi-granular alignments are preferred
in the interaction modeling between tx and ty.
Tisch 2 shows another example of textual entail-
ment. On the unigram level, “out” in the premise
matches with “out” in the hypothesis perfectly,
whereas “out” in the premise is contradictory
to “inside” in the hypothesis. But their context
snippets—“come out” in the premise and “putting
out a fire” in the hypothesis—clearly indicate
that they are not semantically equivalent. And the
gold conclusion for this pair is "neutral" (i.e.,
the hypothesis is possibly true). Therefore, matching
should be conducted across phrase granularities.
We now present advanced ATTCONV. It is more expressive and modular, based on the two foregoing considerations (i) and (ii).
3.2 Advanced ATTCONV
Adel and Schütze (2017) distinguish between focus and source of attention. The focus of attention is the layer of the network that is reweighted by attention weights. The source of attention is the information source that is used to compute the attention weights. Adel and Schütze showed that increasing the scope of the attention source is beneficial. It possesses some preliminary principles of the query/key/value distinction by Vaswani et al. (2017). Here, we further extend this principle to define the beneficiary of attention—the feature map (labeled "beneficiary" in Figure 4(B)) that is contextualized by the attentive context (labeled "attentive context" in Figure 4(B)).

In the light attentive convolutional layer (Figure 4(A)), the source of attention is the hidden states in sentence tx, the focus of attention is the hidden states of the context ty, and the beneficiary of attention is again the hidden states of tx; that is, it is identical to the source of attention.
We now try to distinguish these three concepts further to promote the expressivity of an attentive convolutional layer. We call it "advanced ATTCONV"; see Figure 4(B). It differs from the light version in three ways: (i) attention source is learned by function fmgran(Hx), feature map Hx of tx acting as input; (ii) attention focus is learned by function fmgran(Hy), feature map Hy of context ty acting as input; and (iii) attention beneficiary is learned by function fbene(Hx), Hx acting as input. Both functions fmgran() and fbene() are based on a gated convolutional function fgconv():

oi = tanh(Wh · ii + bh)                                    (7)
gi = sigmoid(Wg · ii + bg)                                 (8)
fgconv(ii) = gi · ui + (1 − gi) · oi                       (9)
where ii is a composed representation, denoting a generally defined input phrase [· · · , ui, · · · ] of arbitrary length with ui as the central unigram-level hidden state, and the gate gi sets a trade-off between the unigram-level input ui and the temporary output oi at the phrase level. We elaborate these modules in the remainder of this subsection.
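A minimal sketch of the gate in Equations (7)-(9) (our own illustration; the parameter shapes assume the input phrase is a trigram of d-dimensional hidden states):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgconv(i_vec, u_vec, Wh, bh, Wg, bg):
    # Eqs. (7)-(9): gate between the unigram state u_i and the phrase-level output o_i.
    o = np.tanh(Wh @ i_vec + bh)        # Eq. (7)
    g = sigmoid(Wg @ i_vec + bg)        # Eq. (8)
    return g * u_vec + (1.0 - g) * o    # Eq. (9)

d = 4
h_prev, h_i, h_next = np.random.randn(d), np.random.randn(d), np.random.randn(d)
i_vec = np.concatenate([h_prev, h_i, h_next])            # input phrase [..., u_i, ...]
Wh, Wg = np.random.randn(d, 3 * d), np.random.randn(d, 3 * d)
print(fgconv(i_vec, h_i, Wh, np.zeros(d), Wg, np.zeros(d)).shape)   # (4,)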
Attention Source. First, we present a general instance of generating the source of attention by function fmgran(H), learning word representations in multi-granular context. In our system, we consider granularities 1 and 3, corresponding to unigram hidden states and trigram hidden states. For the uni-hidden state case, it is a gated convolution layer:

h^x_{uni,i} = fgconv(h^x_i)                                (10)
For the tri-hidden state case:

h^x_{tri,i} = fgconv([h^x_{i−1}, h^x_i, h^x_{i+1}])        (11)

Finally, the overall hidden state at position i is the concatenation of h^x_{uni,i} and h^x_{tri,i}:

h^x_{mgran,i} = [h^x_{uni,i}, h^x_{tri,i}]                 (12)

that is, fmgran(Hx) = Hx_mgran.

Such a comprehensive hidden state can encode the semantics of multi-granular spans at a position, such as "out" and "come out of." Gating here implicitly enables cross-granular alignments in the subsequent attention mechanism, as it sets highway connections (Srivastava et al., 2015) between the input granularity and the output granularity.
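Putting Equations (10)-(12) together, a self-contained sketch of the multi-granular source might look as follows (our own illustration; it assumes separate gate parameters for the unigram and trigram branches):

import numpy as np

def fgconv(i_vec, u_vec, Wh, bh, Wg, bg):
    # gated convolution, Eqs. (7)-(9)
    o = np.tanh(Wh @ i_vec + bh)
    g = 1.0 / (1.0 + np.exp(-(Wg @ i_vec + bg)))
    return g * u_vec + (1.0 - g) * o

def f_mgran(Hx, uni, tri):
    # Eqs. (10)-(12): concatenate uni- and tri-granular gated hidden states.
    # Hx: d x n feature map; uni/tri: (Wh, bh, Wg, bg) parameter tuples.
    d, n = Hx.shape
    Hpad = np.pad(Hx, ((0, 0), (1, 1)))     # zero-pad sentence boundaries
    cols = []
    for i in range(n):
        h_uni = fgconv(Hx[:, i], Hx[:, i], *uni)                      # Eq. (10)
        window = np.concatenate([Hpad[:, i], Hpad[:, i + 1], Hpad[:, i + 2]])
        h_tri = fgconv(window, Hx[:, i], *tri)                        # Eq. (11)
        cols.append(np.concatenate([h_uni, h_tri]))                   # Eq. (12)
    return np.stack(cols, axis=1)           # returns the 2d x n map Hx_mgran

d, n = 4, 5
uni = (np.random.randn(d, d), np.zeros(d), np.random.randn(d, d), np.zeros(d))
tri = (np.random.randn(d, 3 * d), np.zeros(d), np.random.randn(d, 3 * d), np.zeros(d))
print(f_mgran(np.random.randn(d, n), uni, tri).shape)                 # (8, 5)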
Attention Focus. For simplicity, we use the same architecture for the attention source (just introduced) and for the attention focus, ty (i.e., for the attention focus: fmgran(Hy) = Hy_mgran; see Figure 4(B)). Hence, the focus of attention will participate in the matching process as well as be reweighted to form an attentive context vector. We leave exploring different architectures for attention source and focus for future work.
Another benefit of multi-granular hidden states
in attention focus is to keep structure information in
the context vector. In standard attention mechanisms
in RNNs, all hidden states are average-weighted
as a context vector, and the order information is
missing. By introducing hidden states of larger
granularity into CNNs that keep the local order or
structures, we boost the attentive effect.
Attention Beneficiary. In our system, we simply use fgconv() over uni-granularity to learn a more abstract representation over the current hidden representations in Hx, so that

fbene(h^x_i) = fgconv(h^x_i)                               (13)

Subsequently, the attentive context vector c^x_i is generated based on the attention source feature map fmgran(Hx) and the attention focus feature map fmgran(Hy), according to the description of the light ATTCONV. Then attentive convolution is conducted over the attention beneficiary feature map fbene(Hx) and the attentive context vectors Cx to get a higher-layer feature map for the sentence tx.
3.3 Analysis

Compared with the standard attention mechanism in RNNs, ATTCONV has a similar matching function and a similar process of computing context vectors, but differs in three ways. (i) The discrimination of attention source, focus, and beneficiary improves expressivity. (ii) In CNNs, the surrounding hidden states for a concrete position are available, so the attention matching is able to encode the left context as well as the right context. In RNNs, however, we need bidirectional RNNs to yield both left and right context representations. (iii) As attentive convolution can be implemented by summing up two separate convolution steps (Equations 5 and 6), this architecture provides both attentive representations and representations computed without the use of attention. This strategy is helpful in practice to use richer representations for some NLP problems. In contrast, such a clean modular separation of representations computed with and without attention is harder to realize in attention-based RNNs.
Prior attention mechanisms explored in CNNs mostly involve attentive pooling (dos Santos et al., 2016; Yin et al., 2016); namely, the weights of the post-convolution pooling layer are determined by attention. These weights come from the matching process between hidden states of two text pieces. However, a weight value is not informative enough to tell the relationships between aligned terms. Consider a textual entailment sentence pair for which we need to determine whether "inside −→ outside" holds. The matching degree (take cosine similarity as an example) of these two words is high: for example, ≈ 0.7 in Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). On the other hand, the matching score between "inside" and "in" is lower: 0.31 in Word2Vec, 0.46 in GloVe. Apparently, the higher number 0.7 does not mean that "outside" is more likely than "in" to be entailed by "inside." Instead, joint representations for aligned phrases [h_inside, h_outside], [h_inside, h_in] are more informative and enable finer-grained reasoning than a mechanism that can only transmit information downstream by matching scores. We modify the conventional CNN filters so that "inside" can make the entailment decision by looking at the representation of the counterpart term ("outside" or "in") rather than a matching score.
A more damaging property of attentive pooling
is the following. Even if matching scores could
convey the phrase-level entailment degree to some
extent, matching weights, in fact, are not lever-
aged to make the entailment decision directly;
instead, they are used to weight the sum of the output hidden states of a convolution as the global sentence representation. In other words, fine-grained entailment degrees are likely to be lost in the summation of many vectors. This illustrates why attentive context vectors participating in the convolution operation are expected to be more effective than post-convolution attentive pooling (more explanations in §4.3, paragraph "Visualization").
Intra-context attention and extra-context attention. Figures 4(A) and 4(B) depict the modeling of a sentence tx with its context ty. This is a common application of the attention mechanism in the literature; we call it extra-context attention. But ATTCONV can also be applied to model a single text input, that is, intra-context attention. Consider a sentiment analysis example: "With the 2017 NBA All-Star game in the books I think we can all agree that this was definitely one to remember. Not because of the three-point shootout, the dunk contest, or the game itself but because of the ludicrous trade that occurred after the festivities." This example contains informative points at different locations ("remember" and "ludicrous"); conventional CNNs' ability to model nonlocal dependency is limited because of fixed-size filter widths. In ATTCONV, we can set ty = tx. The attentive context vector then accumulates all related parts together for a given position. In other words, our intra-context attentive convolution is able to connect all related spans together to form a comprehensive decision. This is a new way to broaden the scope of conventional filter widths: a filter now covers not only the local window, but also those spans that are related, but are beyond the scope of the window.
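In code terms (our own schematic, not the authors' implementation), intra-context attention is simply the extra-context computation with the sentence supplied as its own context:

import numpy as np

# Intra-context attention: set ty = tx, so each position of tx attends to all
# (possibly distant) positions of tx itself when building its attentive context.
d = 4
Hx = np.random.randn(d, 10)                          # hidden states of the single input tx
E = Hx.T @ Hx                                        # self-matching scores, Eq. (1)
A = np.exp(E - E.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)                    # softmax over positions of tx
Cx = Hx @ A.T                                        # attentive context drawn from tx itself, Eq. (4)
print(Cx.shape)                                      # (4, 10)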
Comparison to Transformer.2 The "focus" in ATTCONV corresponds to "key" and "value" in Transformer; that is, our versions of "key" and "value" are the same, coming from the context sentence. The "query" in Transformer corresponds to the "source" and "beneficiary" of ATTCONV; namely, our model has two perspectives to utilize the context: one acts as a real query (i.e., "source") to attend the context, the other (i.e., "beneficiary") takes the attentive context back to improve the learned representation of itself. If we reduce ATTCONV to unigram convolutional filters, it is pretty much a single Transformer layer (if we neglect the positional encoding in Transformer and unify the "query-key-value" and "source-focus-beneficiary" mechanisms).

2Our "source-focus-beneficiary" mechanism was inspired by Adel and Schütze (2017). Vaswani et al. (2017) later published the Transformer model, which has a similar "query-key-value" mechanism.
4 Experiments

We evaluate ATTCONV on sentence modeling in three scenarios: (i) Zero-context, that is, intra-context; the same input sentence acts as tx as well as ty; (ii) Single-context, that is, textual entailment—hypothesis modeling with a single premise as the extra-context; and (iii) Multiple-context, namely, claim verification—claim modeling with multiple extra-contexts.
4.1 Common Set-up and Common Baselines
All experiments share a common set-up. The input
is represented using 300-dimensional publicly
available Word2Vec (Mikolov et al., 2013) em-
beddings; out of vocabulary embeddings are ran-
domly initialized. The architecture consists of the
following four layers in sequence: embedding,
attentive convolution, max-pooling, and logistic
regression. The context-aware representation of
tx is forwarded to the logistic regression layer.
We use AdaGrad (Duchi et al., 2011) for training.
Embeddings are fine-tuned during training. Hyper-
parameter values include: learning rate 0.01, hidden
size 300, batch size 50, filter width 3.
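As a rough schematic of this shared architecture (our own sketch with toy dimensions; the real system additionally trains with AdaGrad and fine-tunes the embeddings):

import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 300, 5                                        # hidden size 300; e.g., five Yelp star labels
W1, W2, b = rng.normal(size=(d, 3 * d)), rng.normal(size=(d, d)), np.zeros(d)
Wlr, blr = rng.normal(size=(n_classes, d)), np.zeros(n_classes)

def forward(Ex, Ey):
    # embedding -> attentive convolution -> max-pooling -> logistic regression
    # Ex: d x |tx| embeddings of the target sentence; Ey: d x |ty| embeddings of the context.
    E = Ex.T @ Ey
    A = np.exp(E - E.max(axis=1, keepdims=True)); A /= A.sum(axis=1, keepdims=True)
    Cx = Ey @ A.T                                            # attentive context, Eq. (4)
    Pad = np.pad(Ex, ((0, 0), (1, 1)))
    H = np.stack([np.tanh(W1 @ np.concatenate([Pad[:, i], Pad[:, i + 1], Pad[:, i + 2]])
                          + W2 @ Cx[:, i] + b)
                  for i in range(Ex.shape[1])], axis=1)      # attentive convolution, Eq. (6)
    s = H.max(axis=1)                                        # max-pooling over positions
    logits = Wlr @ s + blr
    p = np.exp(logits - logits.max())
    return p / p.sum()                                       # class distribution

tx, ty = rng.normal(size=(d, 9)), rng.normal(size=(d, 12))
print(forward(tx, ty))

For the zero-context scenario one would simply call forward(tx, tx).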
All experiments are designed to explore comparisons in three aspects: (i) within ATTCONV, "light" vs. "advanced"; (ii) "attentive convolution" vs. "attentive pooling"/"attention only"; and (iii) "attentive convolution" vs. "attentive RNN".

To this end, we always report "light" and "advanced" ATTCONV performance and compare against five types of common baselines: (i) w/o context; (ii) w/o attention; (iii) w/o convolution: similar to the Transformer's principle (Vaswani et al., 2017), we discard the convolution operation in Equation (5) and forward the addition of the attentive context c^x_i and the h^x_i into a fully connected layer. To keep enough parameters, we stack in total four layers so that "w/o convolution" has the same size of parameters as light-ATTCONV; (iv) with attention: RNNs with attention and CNNs with attentive pooling; and (v) prior state of the art, typeset in italics.
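For clarity, the "w/o convolution" variant can be sketched as follows (our own reading of the description above; layer count and sizes are assumptions):

import numpy as np

def wo_convolution_layer(Hx, Cx, W, b):
    # "w/o convolution" baseline: add the attentive context to each hidden state
    # and pass the sum through a position-wise fully connected layer, with no
    # windowed filters; four such layers are stacked to match light-ATTCONV's size.
    return np.tanh(W @ (Hx + Cx) + b[:, None])

d, n = 4, 6
Hx, Cx = np.random.randn(d, n), np.random.randn(d, n)
W, b = np.random.randn(d, d), np.zeros(d)
print(wo_convolution_layer(Hx, Cx, W, b).shape)   # (4, 6)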
Systems                          acc
w/o attention:
  Paragraph Vector               58.43
  Lin et al. Bi-LSTM             61.99
  Lin et al. CNN                 62.05
  MultichannelCNN (Kim)          64.62
with attention:
  CNN+internal attention         61.43
  ABCNN                          61.36
  APCNN                          61.98
  Attentive-LSTM                 63.11
  Lin et al. RNN Self-Att.       64.21
ATTCONV:
  light                          66.75
  w/o convolution                61.34
  advanced                       67.36∗

Table 3: System comparison of sentiment analysis on Yelp. Significant improvements over state of the art are marked with ∗ (test of equal proportions, p < 0.05).
4.2 Sentence Modeling with Zero-context:
Sentiment Analysis
We evaluate sentiment analysis on a Yelp bench-
mark released by Lin et al. (2017): review-star
pairs in sizes 500K (train), 2,000 (dev), and 2,000
(test). Most text instances in this data set are
long: 25%, 50%, 75% percentiles are 46, 81,
and 125 words, respectively. The task is five-way
classification: 1 to 5 stars. The measure is accuracy.
We use this benchmark because the predominance
of long texts lets us evaluate the system perfor-
mance of encoding long-range context, and the
system by Lin et al. is directly related to ATTCONV
in the intra-context scenario.
Baselines.
(i) w/o attention. Three baselines
from Lin et al. (2017): Paragraph Vector (Le
and Mikolov, 2014) (unsupervised sentence rep-
resentation learning), BiLSTM, and CNN. We
also reimplement MultichannelCNN (Kim, 2014),
recognized as a simple but surprisingly strong
sentence modeler. (ii) with attention. A vanilla
“Attentive-LSTM” by Rocktäschel et al. (2016).
“RNN Self-Attention” (Lin et al., 2017) is di-
rectly comparable to ATTCONV: it also uses intra-
context attention. “CNN+internal attention” (Adel
and Schütze, 2017), an intra-context attention idea
similar to, but less complicated than, Lin et al.
(2017). ABCNN & APCNN – CNNs with atten-
tive pooling.
Results and Analysis. Table 3 shows that
advanced-ATTCONV surpasses its “light” coun-
terpart, and obtains significant improvement over
the state of the art.
Figure 5: ATTCONV vs. MultichannelCNN for groups of Yelp text with ascending text lengths. ATTCONV performs more robustly across different lengths of text.
In addition, ATTCONV surpasses attentive pooling (ABCNN & APCNN) by a big margin (>5%) and outperforms the representative attentive-LSTM (>4%). Moreover, it outperforms the two self-attentive models: CNN+internal attention (Adel and Schütze, 2017) and RNN Self-Attention (Lin et al., 2017), which are specifically designed for single-sentence modeling. Adel and Schütze (2017) generate an attention weight for each CNN hidden state by a linear transformation of the same hidden state, then compute a weighted average over all hidden states as the text representation. Lin et al. (2017) extend that idea by generating a group of attention weight vectors; RNN hidden states are then averaged by those diverse weight vectors, allowing the extraction of different aspects of the text into multiple vector representations. Both works are essentially weighted mean pooling, similar to the attentive pooling in Yin et al. (2016) and dos Santos et al. (2016).
Next, we compare ATTCONV with MultichannelCNN, the strongest baseline system ("w/o attention"), for different length ranges to check whether ATTCONV can really encode long-range context effectively. We sort the 2,000 test instances by length, then split them into 10 groups, each consisting of 200 instances. Figure 5 shows the performance of ATTCONV vs. MultichannelCNN. We observe that ATTCONV consistently outperforms MultichannelCNN for all lengths. Furthermore, the improvement over MultichannelCNN generally increases with length. This is evidence that ATTCONV more effectively models long text.
         #instances   #entail   #neutral
train        23,596     8,602     14,994
dev           1,304       657        647
test          2,126       842      1,284
total        27,026    10,101     16,925

Table 4: Statistics of the SCITAIL data set.
Ö
/
w
N
Ö
ich
T
N
e
T
T
A
H
T
ich
w
N
Ö
ich
T
N
e
T
T
A
acc
Systeme
60.4
Majority Class
65.1
w/o Context
69.5
Bi-LSTM
70.6
NGram model
Bi-CNN
74.4
Enhanced LSTM 70.6
Attentive-LSTM 71.5
72.3
Decomp-Att
77.3
DGEM
75.2
APCNN
75.8
ABCNN
78.1
75.1
79.2
ATTCONV-light
w/o convolution
ATTCONV-advanced
Tisch 5: ATTCONV vs. baselines on SCITAIL.
This is likely because of ATTCONV’s capability to
encode broader context in its filters.
4.3 Sentence Modeling with a Single Context:
Textual Entailment
Data Set. SCITAIL (Khot et al., 2018) is a textual
entailment benchmark designed specifically for a
real-world task: multi-choice question answering.
All hypotheses tx were obtained by rephrasing
(question, correct answer) pairs into single sen-
tences, and premises ty are relevant Web sentences
retrieved by an information retrieval method. Then
the task is to determine whether a hypothesis is
true or not, given a premise as context. All (tx, ty)
pairs are annotated via crowdsourcing. Accuracy
is reported. Table 1 shows examples and Table 4
gives statistics.
By this construction, a substantial performance
improvement on SCITAIL is equivalent to a better
QA performance (Khot et al., 2018). The hypoth-
esis tx is the target sentence, and the premise ty
acts as its context.
Baselines. Apart from the common baselines (see Section 4.1), we include systems covered by Khot et al. (2018): (i) n-gram Overlap: an overlap baseline, considering lexical granularity such as unigrams, one-skip bigrams, and one-skip trigrams. (ii) Decomposable Attention Model (Decomp-Att) (Parikh et al., 2016): explore attention mechanisms to decompose the task into subtasks to solve in parallel. (iii) Enhanced LSTM (Chen et al., 2017b): enhance LSTM by taking into account syntax and semantics from parsing information. (iv) DGEM (Khot et al., 2018): a decomposed graph entailment model, the current state of the art.
Table 5 presents results on SCITAIL. (i) Within ATTCONV, "advanced" beats "light" by 1.1%; (ii) "w/o convolution" and attentive pooling (i.e., ABCNN & APCNN) get lower performances by 3%–4%; (iii) more complicated attention mechanisms equipped into LSTMs (e.g., "attentive-LSTM" and "enhanced-LSTM") perform even worse.
Error Analysis. To better understand ATTCONV on SCITAIL, we study some error cases listed in Table 6.
Language conventions. Pair #1 uses sequential commas (i.e., in "the egg, larva, pupa, and adult") or a special symbol sequence (i.e., in "egg −> larva −> pupa −> adult") to form a set or sequence; pair #2 has "A (or B)" to express the equivalence of A and B. This challenge is expected to be handled by DNNs with specific training signals.

Knowledge beyond the text ty. In #3, "because smaller amounts of water evaporate in the cool morning" cannot be inferred from the premise ty directly. The main challenge in #4 is to distinguish "weight" from "force," which requires background physical knowledge that is beyond the presented text here and beyond the expressivity of word embeddings.
Complex discourse relation. The premise in #5 has an "or" structure. In #6, the inserted phrase "with about 16,000 species" makes the connection between "nonvascular plants" and "the mosses, liverworts, and hornworts" hard to detect. Both instances require the model to decode the discourse relation.
ATTCONV on SNLI. Table 7 shows the comparison. We observe that: (i) classifying hypotheses without looking at premises, that is, the "w/o context" baseline, results in a large improvement over the "majority baseline." This verifies the strong bias in the hypothesis construction of the SNLI data set (Gururangan et al., 2018; Poliak et al., 2018). (ii) ATTCONV (advanced) surpasses all "w/o attention" baselines and "with attention" CNN baselines (i.e., attentive pooling), obtaining a performance (87.8%) that is close to the state of the art (88.7%).
#   (Premise ty, Hypothesis tx) Pair                                                         G/P   Challenge
1   (ty) These insects have 4 life stages, the egg, larva, pupa, and adult.
    (tx) The sequence egg −> larva −> pupa −> adult shows the life cycle of some insects.    1/0   language conventions
2   (ty) . . . the notochord forms the backbone (or vertebral column).
    (tx) Backbone is another name for the vertebral column.                                  1/0   language conventions
3   (ty) Water lawns early in the morning . . . prevent evaporation.
    (tx) Watering plants and grass in the early morning is a way to conserve water
         because smaller amounts of water evaporate in the cool morning.                     1/0   beyond text
4   (ty) . . . the SI unit . . . for force is the Newton (N) and is defined as (kg·m/s−2).
    (tx) Newton (N) is the SI unit for weight.                                               0/1   beyond text
5   (ty) Heterotrophs get energy and carbon from living plants or animals (consumers)
         or from dead organic matter (decomposers).
    (tx) Mushrooms get their energy from decomposing dead organisms.                         0/1   discourse relation
6   (ty) . . . are a diverse assemblage of three phyla of nonvascular plants, with
         about 16,000 species, that includes the mosses, liverworts, and hornworts.
    (tx) Moss is best classified as a nonvascular plant.                                     1/0   discourse relation

Table 6: Error cases of ATTCONV in SCITAIL. ". . . ": truncated text. "G/P": gold/predicted label.
Ö
/
w
N
Ö
ich
T
N
e
T
T
A
H
T
ich
w
N
Ö
ich
T
N
e
T
T
A
0
Systeme
#para acc
34.3
majority class
w/o context (d.h., hypothesis only) 270K 68.7
220K 77.6
Bi-LSTM (Bowman et al., 2015)
270K 80.3
Bi-CNN
3.5M 82.1
Tree-CNN (Mou et al., 2016)
6.3M 84.8
NES (Munkhdalai and Yu, 2017)
250K 83.5
Attentive-LSTM (Rocktäschel)
95M 84.4
Self-Attentive (Lin et al., 2017)
1.9M 86.1
Match-LSTM (Wang and Jiang)
3.4M 86.3
LSTMN (Cheng et al., 2016)
580K 86.8
Decomp-Att (Parikh)
7.7M 88.6
Enhanced LSTM (Chen et al., 2017B)
ABCNN (Yin et al., 2016)
834K 83.7
APCNN (dos Santos et al., 2016) 360K 83.9
360K 86.3
360K 84.9
900K 87.8
8M 88.7
ATTCONV – light
w/o convolution
ATTCONV – advanced
State-of-the-art (Peters et al., 2018)
We also report the parameter size in SNLI as most baseline systems did. Table 7 shows that, in comparison to these baselines, our ATTCONV (light and advanced) has a more limited number of parameters, yet its performance is competitive.

Visualization. In Figure 6, we visualize the attention mechanisms explored in attentive convolution (Figure 6(A)) and attentive pooling (Figure 6(B)).

Figure 6(A) explores the visualization of two kinds of features learned by light ATTCONV on the SNLI data set (most are short sentences with rich phrase-level reasoning): (i) e_{i,j} in Equation (1) (after softmax), which shows the attention distribution over context ty by the hidden state h^x_i in sentence tx; (ii) h^x_{i,new} in Equation (5) for i = 1, 2, · · · , |tx|, which shows the context-aware word features in tx. By the two visualized features, we can identify which parts of the context ty are more important for a word in sentence tx, and a max-pooling, over those context-driven word representations, selects and forwards dominant (word, leftcontext, rightcontext, attcontext) combinations to the final decision maker.
Figure 6(A) shows the features3 of sentence tx = "A dog jumping for a Frisbee in the snow" conditioned on the context ty = "An animal is outside in the cold weather, playing with a plastic toy." Observations: (i) The right figure shows that the attention mechanism successfully aligns some cross-sentence phrases that are informative for the textual entailment problem, such as "dog" to "animal" (i.e., c^x_dog ≈ "animal") and "Frisbee" to "plastic toy" and "playing" (i.e., c^x_Frisbee ≈ "plastic toy" + "playing"); (ii) The left figure shows that a max-pooling over the generated features of filter_1 and filter_2 will focus on the context-aware phrases (a, dog, jumping, c^x_dog) and (a, Frisbee, in, c^x_Frisbee), respectively; the two phrases are crucial to the entailment reasoning for this (ty, tx) pair.

3For simplicity, we show 2 out of 300 ATTCONV filters.
(A) Visualization of features generated by ATTCONV's filters on sentence tx and ty. A max-pooling, over filter_1, locates the phrase (a, dog, jumping, c^x_dog), and locates the phrase (a, Frisbee, in, c^x_Frisbee) via filter_2. "c^x_dog" (resp. c^x_Frisbee)—the attentive context of "dog" (resp. "Frisbee") in tx—mainly comes from "animal" (resp. "toy" and "playing") in ty.
(B) Attention visualization for attentive pooling (ABCNN). Based on the words in tx and ty, first, a convolution layer with filter width 3 outputs hidden states for each sentence; then each hidden state obtains an attention weight for how well it matches towards all the hidden states in the other sentence; and finally all hidden states in each sentence are weighted and summed up as the sentence representation. This visualization shows that the spans "dog jumping for" and "in the snow" in tx and the spans "animal is outside" and "in the cold" in ty are most indicative for the entailment reasoning.

Figure 6: Attention visualization for attentive convolution (top) and attentive pooling (bottom) between sentence tx = "A dog jumping for a Frisbee in the snow" (left) and sentence ty = "An animal is outside in the cold weather, playing with a plastic toy" (right).
Frisbee, In, cx
F risbee) jeweils; the two phrases
are crucial to the entailment reasoning for this (ty,
tx) pair.
Figur 6(B) shows the phrase-level (d.h., jede
consecutive trigram) attentions after the convolu-
tion operation. As Figure 3 zeigt an, a subsequent
pooling step will weight and sum up those phrase-
level hidden states as an overall sentence represen-
Station. Also, even though some phrases such as “in
the snow” in tx and “in the cold” in ty show im-
portance in this pair instance, the final sentence
representation still (ich) lacks a fine-grained phrase-
to-phrase reasoning, Und (ii) underestimates some
indicative phrases such as “A dog” in tx and “An
animal” in ty.
Briefly, attentive convolution first performs phrase-to-phrase inter-sentence reasoning, then composes features; attentive pooling composes phrase features as sentence representations, then performs reasoning. Intuitively, attentive convolution better fits the way humans conduct entailment reasoning, and our experiments validate its superiority—it is the hidden states of the aligned phrases rather than their matching scores that support better representation learning and decision making.

        #SUPPORTED   #REFUTED     #NEI
train       80,035     29,775   35,639
dev          3,333      3,333    3,333
test         3,333      3,333    3,333

Table 8: Statistics of claims in the FEVER data set.
The comparisons in both SCITAIL and SNLI
show that:
• CNNs with attentive convolution (i.e., ATTCONV) outperform the CNNs with attentive pooling (i.e., ABCNN and APCNN);
• Some competitors got over-tuned on SNLI
while demonstrating mediocre performance
in SCITAIL—a real-world NLP task. Our sys-
tem ATTCONV shows its robustness in both
benchmark data sets.
4.4 Sentence Modeling with Multiple Contexts:
Claim Verification
Data Set. For this task, we use FEVER (Thorne et al., 2018); it infers the truthfulness of claims by extracted evidence. The claims in FEVER were manually constructed from the introductory sections of about 50K popular Wikipedia articles in the June 2017 dump. Claims have 9.4 tokens on average. Table 8 lists the claim statistics.

In addition to claims, FEVER also provides a Wikipedia corpus of approximately 5.4 million articles, from which gold evidences are gathered and provided. Figure 7 shows the distribution of sentence counts in FEVER's ground truth evidence set (i.e., the context size in our experimental set-up). We can see that roughly 28% of evidence instances cover more than one sentence and roughly 16% cover more than two sentences.
Each claim is labeled as SUPPORTED, RE-
FUTED, or NOTENOUGHINFO (NEI) given the
gold evidence. The standard FEVER task also
explores the performance of evidence extraction,
evaluated by F1 between extracted evidence and
gold evidence. This work focuses on the claim en-
tailment part, assuming the evidences are provided
(extracted or gold). More specifically, we treat a
claim as tx, and its evidence sentences as context ty.
Figure 7: Distribution of #sentence in FEVER evidence.
This task has two evaluations: (i) ALL—accuracy of claim verification regardless of the validness of evidence; (ii) SUBSET—verification accuracy of a subset of claims, in which the gold evidence for SUPPORTED and REFUTED claims must be fully retrieved. We use the official evaluation toolkit.4
Set-ups. (i) We adopt the same retrieved evidence set (i.e., contexts ty) as Thorne et al. (2018): the top-5 most relevant sentences from the top-5 retrieved wiki pages by a document retriever (Chen et al., 2017a). The quality of this evidence set against the ground truth is: 44.22 (recall), 10.44 (precision), 16.89 (F1) on dev, and 45.89 (recall), 10.79 (precision), 17.47 (F1) on test. This set-up challenges our system with potentially unrelated or even misleading context. (ii) We use the ground truth evidence as context. This lets us determine how far our ATTCONV can go for this claim verification problem once the accurate evidence is given.
Baselines. We first include the two systems ex-
plored by Thorne et al. (2018): (i) MLP: A multi-
layer perceptron baseline with a single hidden
layer, based on tf-idf cosine similarity between the
claim and the evidence (Riedel et al., 2017); (ii)
Decomp-Att (Parikh et al., 2016): A decompos-
able attention model that is tested in SCITAIL and
SNLI before. Note that both baselines first relied
on an information retrieval system to extract the
top-5 relevant sentences from the retrieved top-5
wiki pages as evidence for claims, then concate-
nated all evidence sentences as a longer context
for a claim.
4https://github.com/sheffieldnlp/fever-
scorer.
                                        retrie. evi.        gold evi.
System                                  ALL      SUB
dev
  MLP                                   41.86    19.04       65.13
  Bi-CNN                                47.82    26.99       75.02
  APCNN                                 50.75    30.24       78.91
  ABCNN                                 51.39    32.44       77.13
  Attentive-LSTM                        52.47    33.19       78.44
  Decomp-Att                            52.09    32.57       80.82
  ATTCONV light, context-wise           57.78    34.29       83.20
    w/o conv.                           47.29    25.94       73.18
  ATTCONV light, context-conc           59.31    37.75       84.74
    w/o conv.                           48.02    26.67       73.44
  ATTCONV advan., context-wise          60.20    37.94       84.99
  ATTCONV advan., context-conc          62.26    39.44       86.02
test
  (Thorne et al., 2018)                 50.91    31.87       –
  ATTCONV                               61.03    38.77       84.61

Table 9: Performance on dev and test of FEVER. In the "gold evi." scenario, ALL and SUBSET are the same.
We then consider two variants of our ATTCONV for dealing with the modeling of tx with variable-size context ty. (i) Context-wise: we first use all evidence sentences one by one as context ty to guide the representation learning of the claim tx, generating a group of context-aware representation vectors for the claim; then we do element-wise max-pooling over this vector group as the final representation of the claim. (ii) Context-conc: concatenate all evidence sentences as a single piece of context, then model the claim based on this context. This is the same preprocessing step as Thorne et al. (2018) did.
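Schematically (our own sketch; encode() below is a toy stand-in for the ATTCONV encoder of Section 3, not the actual implementation), the two variants differ as follows:

import numpy as np

def encode(claim_emb, context_emb):
    # stand-in for the ATTCONV encoder: returns a fixed-size claim representation
    # conditioned on one context (a simplified attention + max-pool for illustration).
    A = claim_emb.T @ context_emb
    A = np.exp(A - A.max(axis=1, keepdims=True)); A /= A.sum(axis=1, keepdims=True)
    return (claim_emb + context_emb @ A.T).max(axis=1)

d = 8
claim = np.random.randn(d, 6)
evidence = [np.random.randn(d, np.random.randint(5, 15)) for _ in range(4)]

# (i) Context-wise: encode the claim against each evidence sentence separately,
#     then element-wise max-pool over the resulting claim vectors.
rep_wise = np.stack([encode(claim, e) for e in evidence]).max(axis=0)

# (ii) Context-conc: concatenate all evidence sentences into one long context.
rep_conc = encode(claim, np.concatenate(evidence, axis=1))

print(rep_wise.shape, rep_conc.shape)    # (8,) (8,)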
Results. Table 9 compares our ATTCONV in different set-ups against the baselines. First, ATTCONV surpasses the top competitor "Decomp-Att," reported in Thorne et al. (2018), with big margins in dev (ALL: 62.26 vs. 52.09) and test (ALL: 61.03 vs. 50.91). In addition, "advanced-ATTCONV" consistently outperforms its "light" counterpart. Moreover, ATTCONV surpasses attentive pooling (i.e., ABCNN & APCNN) and "attentive-LSTM" by >10% in ALL, >6% in SUB and >8% in "gold evi."
Figure 8 further explores the fine-grained performance of ATTCONV for different sizes of gold evidence (i.e., different sizes of context ty). The system shows comparable performances for sizes 1 and 2. Even for context sizes larger than 5, it only drops by 5%.
Figure 8: Fine-grained ATTCONV performance given variable-size gold FEVER evidence as the claim's context.
These experiments on claim verification clearly
show the effectiveness of ATTCONV in sen-
tence modeling with variable-size context. This
should be attributed to the attention mechanism in
ATTCONV, which enables a word or a phrase in
the claim tx to “see” and accumulate all related
clues even if those clues are scattered across mul-
tiple contexts ty.
Error Analysis. We do error analysis for the "retrieved evidence" scenario.
Error case #1 is due to the failure of fully re-
trieving all evidence. For example, a successful
support of the claim “Weekly Idol has a host born
in the year 1978” requires the information compo-
sition from three evidence sentences, two from the
wiki article “Weekly Idol,” and one from “Jeong
Hyeong-don.” However, only one of them is
retrieved in the top-5 candidates. Our system pre-
dicts REFUTED. This error is more common in
instances for which no evidence is retrieved.
Error case #2 is due to the insufficiency of representation learning. Consider the wrong claim "Corsica belongs to Italy" (i.e., in the REFUTED class). Even though good evidence is retrieved, the system is misled by noise evidence: "It is located . . . west of the Italian Peninsula, with the nearest land mass being the Italian island . . . ".
Error case #3 is due to the lack of advanced data preprocessing. For a human, it is very easy to "refute" the claim "Telemundo is an English-language television network" with the evidence "Telemundo is an American Spanish-language terrestrial television . . . " (from the "Telemundo" wikipage), by checking the keyphrases: "Spanish-language" vs. "English-language." Unfortunately, both tokens are unknown words in our system; as a result,
they do not have informative embeddings. A more
careful data preprocessing is expected to help.
5 Summary
We presented ATTCONV, the first work that enables CNNs to acquire the attention mechanism commonly used in RNNs. ATTCONV combines the strengths of CNNs with the strengths of the RNN attention mechanism. On the one hand, it makes broad and rich context available for prediction, either context from external inputs (extra-context) or internal inputs (intra-context). On the other hand, it can take full advantage of the strengths of convolution: it is more order-sensitive than attention in RNNs, and local-context information can be powerfully and efficiently modeled through convolution filters. Our experiments demonstrate the effectiveness and flexibility of ATTCONV when modeling sentences with variable-size context.
Acknowledgments
We gratefully acknowledge funding for this work by the European Research Council (ERC #740516). We would like to thank the anonymous reviewers for their helpful comments.
References
Heike Adel and Hinrich Schütze. 2017. Exploring
different dimensions of attention for uncertainty
detection. In Proceedings of EACL, pages 22–34,
Valencia, Spain.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
Bengio. 2015. Neural machine translation by
jointly learning to align and translate. In Pro-
ceedings of ICLR, San Diego, USA.
Samuel R. Bowman, Gabor Angeli, Christopher
Potts, and Christopher D. Manning. 2015. A
large annotated corpus for learning natural lan-
guage inference. In Proceedings of EMNLP,
pages 632–642, Lisbon, Portugal.
Danqi Chen, Adam Fisch, Jason Weston, Und
Antoine Bordes. 2017A. Reading Wikipedia to
answer open-domain questions. In Proceedings
of ACL, pages 1870–1879, Vancouver, Canada.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei,
Hui Jiang, and Diana Inkpen. 2017b. Enhanced
LSTM for natural language inference. In Pro-
ceedings of ACL, pages 1657–1668, Vancouver,
Canada.
Jianpeng Cheng, Li Dong, and Mirella Lapata.
2016. Long short-term memory-networks for
machine reading. In Proceedings of EMNLP,
pages 551–561, Austin, USA.
Ronan Collobert, Jason Weston, Léon Bottou,
Michael Karlen, Koray Kavukcuoglu, and Pavel
P. Kuksa. 2011. Natural language processing
(almost) from scratch. Journal of Machine
Learning Research, 12:2493–2537.
Ido Dagan, Dan Roth, Mark Sammons, and
Fabio Massimo Zanzotto. 2013. Recognizing
Textual Entailment: Models and Applications.
Synthesis Lectures on Human Language Tech-
nologies. Morgan & Claypool.
John Duchi, Elad Hazan, and Yoram Singer. 2011.
Adaptive subgradient methods for online learn-
ing and stochastic optimization. Journal of Ma-
chine Learning Research, 12:2121–2159.
Jeffrey L. Elman. 1990. Finding structure in time.
Cognitive Science, 14(2):179–211.
Jonas Gehring, Michael Auli, David Grangier,
Denis Yarats, and Yann N. Dauphin. 2017.
Convolutional sequence to sequence learning.
In Proceedings of ICML, pages 1243–1252,
Sydney, Australia.
Alex Graves. 2013. Generating sequences with re-
current neural networks. CoRR, abs/1308.0850.
Alex Graves, Greg Wayne, and Ivo Danihelka.
2014. Neural turing machines. CoRR, abs/1410.5401.
Suchin Gururangan, Swabha Swayamdipta, Omer
Levy, Roy Schwartz, Samuel R. Bowman, and
Noah A. Smith. 2018. Annotation artifacts in
natural language inference data. In Proceedings
of NAACL-HLT, pages 107–112, New Orleans,
USA.
Karl Moritz Hermann, Tomás Kociský, Edward
Grefenstette, Lasse Espeholt, Will Kay, Mustafa
Suleyman, and Phil Blunsom. 2015. Teach-
ing machines to read and comprehend. In Pro-
ceedings of NIPS, pages 1693–1701, Montreal,
Canada.
Nal Kalchbrenner, Edward Grefenstette, and Phil
Blunsom. 2014. A convolutional neural net-
work for modelling sentences. In Proceedings
of ACL, pages 655–665, Baltimore, USA.
Tushar Khot, Ashish Sabharwal, and Peter Clark.
2018. SciTaiL: A textual entailment dataset
from science question answering. In Proceed-
ings of AAAI, pages 5189–5197, New Orleans,
USA.
Yoon Kim. 2014. Convolutional neural networks
for sentence classification. In Proceedings of
EMNLP, pages 1746–1751, Doha, Qatar.
Yoon Kim, Carl Denton, Luong Hoang, and
Alexander M. Rush. 2017. Structured atten-
tion networks. In Proceedings of ICLR, Toulon,
France.
Ankit Kumar, Ozan Irsoy, Peter Ondruska,
Mohit Iyyer, James Bradbury, Ishaan Gulrajani,
Victor Zhong, Romain Paulus, and Richard
Socher. 2016. Ask me anything: Dynamic
memory networks for natural language process-
ing. In Proceedings of ICML, pages 1378–1387,
New York City, USA.
Quoc Le and Tomas Mikolov. 2014. Distributed
representations of sentences and documents.
In Proceedings of ICML, pages 1188–1196,
Beijing, China.
Yann LeCun, Léon Bottou, Yoshua Bengio, and
Patrick Haffner. 1998. Gradient-based learning
applied to document recognition. Verfahren
of the IEEE, 86(11):2278–2324.
Jiwei Li, Minh-Thang Luong, and Dan Jurafsky.
2015. A hierarchical neural autoencoder for
paragraphs and documents. In Proceedings of
ACL, pages 1106–1115, Beijing, China.
Jindrich Libovický and Jindrich Helcl. 2017. At-
tention strategies for multi-source sequence-
to-sequence learning. In Proceedings of ACL,
pages 196–202, Vancouver, Canada.
Zhouhan Lin, Minwei Feng, Cícero Nogueira dos
Santos, Mo Yu, Bing Xiang, Bowen Zhou,
and Yoshua Bengio. 2017. A structured self-
attentive sentence embedding. In Proceedings
of ICLR, Toulon, France.
Minh-Thang Luong, Hieu Pham, and Christopher
D. Manning. 2015. Effective approaches to
attention-based neural machine translation. In
Proceedings of EMNLP, pages 1412–1421,
Lisbon, Portugal.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016.
Neural variational inference for text processing.
In Proceedings of ICML, pages 1727–1736,
New York City, USA.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory
S. Corrado, and Jeffrey Dean. 2013. Dis-
tributed representations of words and phrases
and their compositionality. In Proceedings of
NIPS, pages 3111–3119, Lake Tahoe, USA.
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui
Yan, and Zhi Jin. 2016. Natural language in-
ference by tree-based convolution and heuristic
matching. In Proceedings of ACL, pages 130–136,
Berlin, Germany.
Tsendsuren Munkhdalai and Hong Yu. 2017.
Neural semantic encoders. In Proceedings of
EACL, pages 397–407, Valencia, Spain.
Ramesh Nallapati, Bowen Zhou, Cícero Nogueira
dos Santos, Çaglar Gülçehre, and Bing Xiang.
2016. Abstractive text summarization using
sequence-to-sequence RNNs and beyond. In Pro-
ceedings of CoNLL, pages 280–290, Berlin,
Germany.
Ankur P. Parikh, Oscar Täckström, Dipanjan Das,
and Jakob Uszkoreit. 2016. A decomposable
attention model for natural language inference.
In Proceedings of EMNLP, pages 2249–2255,
Austin, USA.
Jeffrey Pennington, Richard Socher, and Christopher
D. Manning. 2014. GloVe: Global vectors for
word representation. In Proceedings of EMNLP,
pages 1532–1543, Doha, Qatar.
Matthew E. Peters, Mark Neumann, Mohit Iyyer,
Matt Gardner, Christopher Clark, Kenton Lee,
and Luke Zettlemoyer. 2018. Deep contextu-
alized word representations. In Proceedings of
NAACL-HLT, pages 2227–2237, New Orleans,
USA.
Adam Poliak, Jason Naradowsky, Aparajita Haldar,
Rachel Rudinger, and Benjamin Van Durme.
2018. Hypothesis only baselines in natural
language inference. In Proceedings of *SEM,
pages 180–191, New Orleans, USA.
Benjamin Riedel, Isabelle Augenstein, Georgios P.
Spithourakis, and Sebastian Riedel. 2017. A
simple but tough-to-beat baseline for the fake
news challenge stance detection task. CoRR,
abs/1707.03264.
Tim Rocktäschel, Edward Grefenstette, Karl
Moritz Hermann, Tomáš Kočiský, and Phil
Blunsom. 2016. Reasoning about entailment
with neural attention. In Proceedings of ICLR,
San Juan, Puerto Rico.
Cícero Nogueira dos Santos, Ming Tan, Bing
Xiang, and Bowen Zhou. 2016. Attentive pool-
ing networks. CoRR, abs/1602.03609.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi,
and Hannaneh Hajishirzi. 2017. Bidirectional
attention flow for machine comprehension. In
Proceedings of ICLR, Toulon, France.
Lifeng Shang, Zhengdong Lu, and Hang Li.
2015. Neural responding machine for short-
text conversation. In Proceedings of ACL,
pages 1577–1586, Beijing, China.
Rupesh Kumar Srivastava, Klaus Greff, and
Jürgen Schmidhuber. 2015. Training very
deep networks. In Proceedings of NIPS,
pages 2377–2385, Montreal, Canada.
James Thorne, Andreas Vlachos, Christos
Christodoulopoulos, and Arpit Mittal. 2018.
FEVER: A large-scale dataset for fact extraction
and verification. In Proceedings of NAACL-
HLT, pages 809–819, New Orleans, USA.
Ashish Vaswani, Noam Shazeer, Niki Parmar,
Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. 2017. At-
tention is all you need. In Proceedings of NIPS,
pages 6000–6010, Long Beach, USA.
Shuohang Wang and Jing Jiang. 2016. Learn-
ing natural language inference with LSTM. In
Proceedings of NAACL-HLT, pages 1442–1451,
San Diego, USA.
Shuohang Wang and Jing Jiang. 2017. Machine
comprehension using match-LSTM and an-
swer pointer. In Proceedings of ICLR, Toulon,
France.
Wenhui Wang, Nan Yang, Furu Wei, Baobao
Chang, and Ming Zhou. 2017a. Gated self-
matching networks for reading comprehension
and question answering. In Proceedings of ACL,
pages 189–198, Vancouver, Canada.
Zhiguo Wang, Wael Hamza, and Radu Florian.
2017b. Bilateral multi-perspective matching for
natural language sentences. In Proceedings of
IJCAI, pages 4144–4150, Melbourne, Australia.
Caiming Xiong, Stephen Merity, and Richard
Socher. 2016. Dynamic memory networks for
visual and textual question answering. In Pro-
ceedings of ICML, pages 2397–2406, New York
City, USA.
Caiming Xiong, Victor Zhong, and Richard
Socher. 2017. Dynamic coattention networks for
question answering. In Proceedings of ICLR,
Toulon, France.
Wenpeng Yin, Hinrich Schütze, Bing Xiang, and
Bowen Zhou. 2016. ABCNN: Attention-based
convolutional neural network for modeling sen-
tence pairs. TACL, 4:259–272.