Attentive Convolution:
Equipping CNNs with RNN-style Attention Mechanisms

Wenpeng Yin
Department of Computer and Information
Science, University of Pennsylvania
wenpeng@seas.upenn.edu

Hinrich Schütze
Center for Information and Language
Processing, LMU Munich, Germany
inquiries@cislmu.org

Abstract

In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text tx. In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context, but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text tx that are distant or (ii) from extra (i.e., external) contexts ty. Experiments on sentence modeling with zero-context (sentiment analysis), single-context (textual entailment) and multiple-context (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs.1

1 Introduction

Natural language processing (NLP) has benefited greatly from the resurgence of deep neural networks (DNNs), thanks to their high performance with less need for engineered features. A DNN typically is composed of a stack of non-linear transformation layers, each generating a hidden representation for the input by projecting the output of a preceding layer into a new space. To date, building a single and static representation to express an input across diverse problems is far from satisfactory. Instead, it is preferable that the representation of the input vary in different application scenarios. In response, attention mechanisms (Graves, 2013; Graves et al., 2014) have been proposed to dynamically focus on parts of the input that are expected to be more specific to the problem. They are mostly implemented based on fine-grained alignments between two pieces of objects, each emitting a dynamic soft-selection to the components of the other, so that the selected elements dominate in the output hidden representation. Attention-based DNNs have demonstrated good performance on many tasks.

1 https://github.com/yinwenpeng/Attentive_Convolution

Convolutional neural networks (CNNs; LeCun et al., 1998) and recurrent neural networks (RNNs; Elman, 1990) are two important types of DNNs. Most work on attention has been done for RNNs. Attention-based RNNs typically take three types of inputs to make a decision at the current step: (i) the current input state, (ii) a representation of local context (computed unidirectionally or bidirectionally; Rocktäschel et al. [2016]), and (iii) the attention-weighted sum of hidden states corresponding to nonlocal context (e.g., the hidden states of the encoder in neural machine translation; Bahdanau et al. [2015]). An important question, therefore, is whether CNNs can benefit from such an attention mechanism as well, and how. This is our technical motivation.

Our second motivation is natural language understanding. In generic sentence modeling without extra context (Collobert et al., 2011; Kalchbrenner et al., 2014; Kim, 2014), CNNs learn sentence representations by composing word representations that are conditioned on a local context window. We believe that attentive convolution is needed

Transactions of the Association for Computational Linguistics, vol. 6, pp. 687–702, 2018. Action Editor: Slav Petrov.
Submission batch: 6/2018; Revision batch: 10/2018; Published 12/2018.
© 2018 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

premise, modeled as context ty                          label
Plant cells have structures that animal cells lack.      0
Animal cells do not have cell walls.                      1
The cell wall is not a freestanding structure.            0
Plant cells possess a cell wall, animals never.           1

Table 1: Examples of four premises for the hypothesis tx = "A cell wall is not present in animal cells." in the SCITAIL data set. Right column (hypothesis's label): "1" means true, "0" otherwise.

for some natural language understanding tasks that are essentially sentence modeling within contexts. Examples: textual entailment (is a hypothesis true given a premise as the single context?; Dagan et al. [2013]) and claim verification (is a claim correct given extracted evidence snippets from a text corpus as the context?; Thorne et al. [2018]). Consider the SCITAIL (Khot et al., 2018) textual entailment examples in Table 1; here, the input text tx is the hypothesis and each premise is a context text ty. And consider the illustration of claim verification in Figure 1; here, the input text tx is the claim and ty can consist of multiple pieces of context. In both cases, we would like the representation of tx to be context-specific.

In this work, we propose attentive convolution networks, ATTCONV, to model a sentence (i.e., tx) either in intra-context (where ty = tx) or extra-context (where ty ≠ tx and ty can have many pieces) scenarios. In the intra-context case (sentiment analysis, for example), ATTCONV extends the local context window of standard CNNs to cover the entire input text tx. In the extra-context case, ATTCONV extends the local context window to cover accompanying contexts ty.

For a convolution operation over a window in tx such as (leftcontext, word, rightcontext), we first compare the representation of word with all hidden states in the context ty to obtain an attentive context representation attcontext; then convolution filters derive a higher-level representation for word, denoted as wordnew, by integrating word with three pieces of context: leftcontext, rightcontext, and attcontext. We interpret this attentive convolution from two perspectives. (i) For intra-context, a higher-level word representation wordnew is learned by considering the local (i.e., leftcontext and rightcontext) as well as nonlocal (i.e., attcontext) context. (ii) For extra-context, wordnew is generated to represent word, together with its cross-text alignment attcontext, in the context leftcontext and rightcontext. In other words, the decision for the word is made based on the connected hidden states of cross-text aligned terms, together with local context.

Figure 1: Verify claims in contexts.

We apply ATTCONV to three sentence modeling tasks with variable-size context: a large-scale Yelp sentiment classification task (Lin et al., 2017) (intra-context, i.e., no additional context), SCITAIL textual entailment (Khot et al., 2018) (single extra-context), and claim verification (Thorne et al., 2018) (multiple extra-contexts). ATTCONV outperforms competitive DNNs with and without attention and achieves state of the art on the three tasks.

Overall, we make the following contributions:

• This is the first work that equips convolution
filters with the attention mechanism com-
monly used in RNNs.

• We distinguish and build flexible modules—
attention source, attention focus, and atten-
tion beneficiary—to greatly advance the ex-
pressivity of attention mechanisms in CNNs.
• ATTCONV provides a new way to broaden
the originally constrained scope of filters in
conventional CNNs. Broader and richer con-
text comes from either external context (i.e.,
ty) or the sentence itself (i.e., tx).

• ATTCONV shows its flexibility and effec-
tiveness in sentence modeling with variable-
size context.

2 Related Work

In this section we discuss attention-related DNNs
in NLP, the most relevant work for our paper.

2.1 RNNs with Attention

Graves (2013) and Graves et al. (2014) first introduced a differentiable attention mechanism that allows RNNs to focus on different parts of the input. This idea has been broadly explored in RNNs, as shown in Figure 2, to deal with text generation, such as neural machine translation

Figure 2: A simplified illustration of the attention mechanism in RNNs.

(Bahdanau et al., 2015; Luong et al., 2015; Kim
et al., 2017; Libovický and Helcl, 2017), respuesta
generation in social media (Shang et al., 2015),
document reconstruction (Le et al., 2015), y
document summarization (Nallapati et al., 2016);
machine comprehension (Hermann et al., 2015;
Kumar et al., 2016; Xiong et al., 2016; Seo et al.,
2017; Wang and Jiang, 2017; Xiong et al., 2017;
Wang y cols., 2017a); and sentence relation classi-
ficación, such as textual entailment (Cheng et al.,
2016; Rocktäschel et al., 2016; Wang and Jiang,
2016; Wang y cols., 2017b; Chen et al., 2017b) y
answer sentence selection (Miao et al., 2016).

We try to explore the RNN-style attention mech-
anisms in CNNs—more specifically, in convolution.

2.2 CNNs with Attention

In NLP, there is little work on attention-based
CNNs. Gehring et al. (2017) propose an attention-
based convolutional seq-to-seq model for machine
traducción. Both the encoder and decoder are hi-
erarchical convolution layers. At the nth layer of
the decoder, the output hidden state of a convolu-
tion queries each of the encoder-side hidden states,
then a weighted sum of all encoder hidden states
is added to the decoder hidden state, and finally
this updated hidden state is fed to the convolution
at layer n + 1. Their attention implementation re-
lies on the existence of a multi-layer convolution
structure—otherwise the weighted context from
the encoder side could not play a role in the de-
coder. So essentially their attention is achieved af-
ter convolution. In contrast, we aim to modify the
vanilla convolution, so that CNNs with attentive
convolution can be applied more broadly.

We discuss two systems that are representative of CNNs that implement attention in pooling (i.e., the convolution itself is not affected): Yin et al. (2016) and dos Santos et al. (2016), illustrated in Figure 3. Specifically, these two systems work on two input sentences, each with a set of hidden states generated by a convolution layer; then, each sentence learns a weight for every hidden state by comparing this hidden state with all hidden states in the other sentence; finally, each input sentence obtains a representation by a weighted mean pooling over all its hidden states. The core component—weighted mean pooling—was referred to as "attentive pooling," aiming to yield the sentence representation.

Figure 3: Attentive pooling, summarized from ABCNN (Yin et al., 2016) and APCNN (dos Santos et al., 2016).
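As a concrete reference, the following is a minimal numpy sketch of this weighted-mean-pooling idea; the dot-product scorer and the simple row/column aggregation are simplifying assumptions rather than the exact ABCNN/APCNN formulation.

```python
import numpy as np

def attentive_pooling(Hx, Hy):
    """Hx: (d, n), Hy: (d, m): post-convolution hidden states of two sentences.
    Each hidden state is weighted by how well it matches the other sentence,
    and a weighted mean over a sentence's own states yields its representation."""
    scores = Hx.T @ Hy                                  # (n, m) inter-hidden-state matches
    sx = scores.sum(axis=1); sy = scores.sum(axis=0)    # aggregate match per hidden state
    wx = np.exp(sx - sx.max()); wx /= wx.sum()          # softmax weights for tx's states
    wy = np.exp(sy - sy.max()); wy /= wy.sum()          # softmax weights for ty's states
    return Hx @ wx, Hy @ wy                             # weighted mean pooling per sentence
```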

In contrast to attentive convolution, attentive pooling does not directly connect the hidden states of cross-text aligned phrases in a fine-grained manner to the final decision making; only the
matching scores contribute to the final weighting
in mean pooling. This important distinction be-
tween attentive convolution and attentive pooling
is further discussed in Section 3.3.

Inspired by the attention mechanisms in RNNs, we assume that it is the hidden states of aligned phrases, rather than their matching scores, that can better contribute to representation learning and decision making. Hence, our attentive convolution differs from attentive pooling in that it uses attended hidden states from extra context (i.e., ty) or broader-range context within tx to participate in the convolution. In experiments, we will show its superiority.

3 ATTCONV Model

We use bold uppercase (e.g., H) for matrices; bold lowercase (e.g., h) for vectors; bold lowercase with index (e.g., hi) for columns of H; and non-bold lowercase for scalars.

(a) Light attentive convolution layer
(b) Advanced attentive convolution layer

Figure 4: ATTCONV models sentence tx with context ty.

To start, we assume that a piece of text t (t ∈
{tx, ty}) is represented as a sequence of hidden
states hi ∈ Rd (i = 1, 2, . . . , |t|), forming feature
map H ∈ Rd×|t|, where d is the dimensionality
of hidden states. Each hidden state hi has its left
context li and right context ri. In concrete CNN
sistemas, contexts li and ri can cover multiple adja-
cent hidden states; we set li = hi−1 and ri = hi+1
for simplicity in the following description.

We now describe light and advanced versions of ATTCONV. Recall that ATTCONV aims to compute a representation for tx in a way that convolution filters encode not only local context, but also attentive context over ty.

3.1 Light ATTCONV

Figure 4(a) shows the light version of ATTCONV. It differs in two key points—(i) and (ii)—both from the basic convolution layer that models a single piece of text and from the Siamese CNN that models two text pieces in parallel. (i) A matching function determines how relevant each hidden state in the context ty is to the current hidden state h^x_i in sentence tx. We then compute an average of the hidden states in the context ty, weighted by the matching scores, to get the attentive context c^x_i for h^x_i. (ii) The convolution for position i in tx integrates hidden state h^x_i with three sources of context: left context h^x_{i−1}, right context h^x_{i+1}, and attentive context c^x_i.

Attentive Context. First, a function generates a matching score e_{i,j} between a hidden state in tx and a hidden state in ty by (i) dot product:

    e_{i,j} = (h^x_i)^T · h^y_j                                (1)

or (ii) bilinear form:

    e_{i,j} = (h^x_i)^T W_e h^y_j                              (2)

(where W_e ∈ R^{d×d}), or (iii) additive projection:

    e_{i,j} = v_e^T · tanh(W_e · h^x_i + U_e · h^y_j)          (3)

where W_e, U_e ∈ R^{d×d} and v_e ∈ R^d.

Given the matching scores, the attentive context c^x_i for hidden state h^x_i is the weighted average of all hidden states in ty:

    c^x_i = Σ_j softmax(e_i)_j · h^y_j                         (4)

We refer to the concatenation of attentive contexts [c^x_1; . . . ; c^x_i; . . . ; c^x_{|tx|}] as the feature map C^x ∈ R^{d×|tx|} for tx.
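A minimal numpy sketch of Equations (1) and (4), assuming the dot-product scorer; hidden states are columns of the feature maps, as in the text.

```python
import numpy as np

def attentive_context(Hx, Hy):
    """Hx: (d, |tx|), Hy: (d, |ty|). Returns Cx: (d, |tx|), column i being c^x_i."""
    E = Hx.T @ Hy                          # Equation (1): e_{i,j} = (h^x_i)^T h^y_j
    E = E - E.max(axis=1, keepdims=True)   # numerical stability for the softmax
    A = np.exp(E)
    A = A / A.sum(axis=1, keepdims=True)   # softmax over ty for each position i of tx
    return Hy @ A.T                        # Equation (4): weighted average of the h^y_j
```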

Attentive Convolution. After the attentive context has been computed, a position i in the sentence tx has a hidden state h^x_i, the left context h^x_{i−1}, the right context h^x_{i+1}, and the attentive context c^x_i. Attentive convolution then generates the higher-level hidden state at position i:

    h^x_{i,new} = tanh(W · [h^x_{i−1}, h^x_i, h^x_{i+1}, c^x_i] + b)              (5)
                = tanh(W_1 · [h^x_{i−1}, h^x_i, h^x_{i+1}] + W_2 · c^x_i + b)     (6)

where W ∈ R^{d×4d} is the concatenation of W_1 ∈ R^{d×3d} and W_2 ∈ R^{d×d}, and b ∈ R^d.

role          text
premise       Three firefighters come out of subway station
hypothesis    Three firefighters putting out a fire inside of a subway station

Table 2: Multi-granular alignments required in textual entailment.

As Equation (6) shows, Equation (5) can be achieved by summing up the results of two separate and parallel convolution steps before the non-linearity. The first is still a standard convolution-without-attention over feature map H^x with filter width 3, over the window (h^x_{i−1}, h^x_i, h^x_{i+1}). The second is a convolution on the feature map C^x (i.e., the attentive context) with filter width 1 (i.e., over each c^x_i); we then sum the results element-wise, add a bias term, and apply the non-linearity. This divide-then-compose strategy makes attentive convolution easy to implement in practice, with no need to create a new feature map, as required in Equation (5), to integrate H^x and C^x.

It is worth mentioning that W_1 ∈ R^{d×3d} corresponds to the filter parameters of a vanilla CNN; the only added parameter here is W_2 ∈ R^{d×d}, which depends only on the hidden size.
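The divide-then-compose formulation of Equation (6) can be sketched as follows; the explicit loop and the zero-padding at the sentence boundaries are illustrative choices, not taken from the released implementation.

```python
import numpy as np

def attentive_convolution(Hx, Cx, W1, W2, b):
    """Hx, Cx: feature maps with one column per position.
    W1: (d_out, 3*d_x), W2: (d_out, d_c), b: (d_out,)."""
    d_x, n = Hx.shape
    padded = np.concatenate([np.zeros((d_x, 1)), Hx, np.zeros((d_x, 1))], axis=1)
    H_new = np.empty((W1.shape[0], n))
    for i in range(n):
        window = np.concatenate([padded[:, i], padded[:, i + 1], padded[:, i + 2]])
        H_new[:, i] = np.tanh(W1 @ window + W2 @ Cx[:, i] + b)   # Equation (6)
    return H_new

# hypothetical usage together with the attentive_context sketch above:
# Cx = attentive_context(Hx, Hy)
# H_new = attentive_convolution(Hx, Cx, W1, W2, b)
```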

This light ATTCONV shows the basic princi-
ples of using RNN-style attention mechanisms in
convolution. Our experiments show that this light
version of ATTCONV—even though it incurs a
limited increase of parameters (i.e., W_2)—works
much better than the vanilla Siamese CNN and
some of the pioneering attentive RNNs. The fol-
lowing two considerations show that there is space
to improve its expressivity.

(i) Higher-level or more abstract representations are required in subsequent layers. We find that directly forwarding the hidden states in tx or ty to the matching process does not work well in some tasks. Pre-learning more high-level or abstract representations helps in subsequent learning phases.

(ii) Multi-granular alignments are preferred in the interaction modeling between tx and ty. Table 2 shows another example of textual entailment. On the unigram level, "out" in the premise matches "out" in the hypothesis perfectly, whereas "out" in the premise is contradictory to "inside" in the hypothesis. But their context snippets—"come out" in the premise and "putting out a fire" in the hypothesis—clearly indicate that they are not semantically equivalent. And the gold conclusion for this pair is "neutral" (i.e., the hypothesis is possibly true). Therefore, matching should be conducted across phrase granularities.

We now present advanced ATTCONV. It is more
expressive and modular, based on the two forego-
ing considerations (i) y (ii).

3.2 Advanced ATTCONV

Adel and Schütze (2017) distinguish between
focus and source of attention. The focus of atten-

tion is the layer of the network that is reweighted
by attention weights. The source of attention is the
information source that is used to compute the
attention weights. Adel and Schütze showed that
increasing the scope of the attention source is
beneficial. It possesses some preliminary princi-
ples of the query/key/value distinction by Vaswani
et al. (2017). Here, we further extend this principle to define the beneficiary of attention: the feature map (labeled "beneficiary" in Figure 4(b)) that is contextualized by the attentive context (labeled "attentive context" in Figure 4(b)). In the light attentive convolutional layer (Figure 4(a)), the source of attention is the hidden states in sentence tx, the focus of attention is the hidden states of the context ty, and the beneficiary of attention is again the hidden states of tx; that is, it is identical to the source of attention.

We now try to distinguish these three concepts further to promote the expressivity of an attentive convolutional layer. We call it "advanced ATTCONV"; see Figure 4(b). It differs from the light version in three ways: (i) the attention source is learned by function f_mgran(H^x), with feature map H^x of tx acting as input; (ii) the attention focus is learned by function f_mgran(H^y), with feature map H^y of context ty acting as input; and (iii) the attention beneficiary is learned by function f_bene(H^x), with H^x acting as input. Both functions f_mgran() and f_bene() are based on a gated convolutional function f_gconv():

    o_i = tanh(W_h · i_i + b_h)                      (7)
    g_i = sigmoid(W_g · i_i + b_g)                   (8)
    f_gconv(i_i) = g_i · u_i + (1 − g_i) · o_i       (9)

where i_i is a composed representation, denoting a generally defined input phrase [· · · , u_i, · · · ] of arbitrary length with u_i as the central unigram-level hidden state, and the gate g_i sets a trade-off between the unigram-level input u_i and the temporary output o_i at the phrase level. We elaborate these modules in the remainder of this subsection.
Attention Source. First, we present a general instance of generating the source of attention by function f_mgran(·), learning word representations in multi-granular context. In our system, we consider granularities 1 and 3, corresponding to unigram hidden states and trigram hidden states. For the uni-hidden state case, it is a gated convolution layer:

    h^x_{uni,i} = f_gconv(h^x_i)                     (10)


For the tri-hidden state case:

    h^x_{tri,i} = f_gconv([h^x_{i−1}, h^x_i, h^x_{i+1}])        (11)

Finally, the overall hidden state at position i is the concatenation of h^x_{uni,i} and h^x_{tri,i}:

    h^x_{mgran,i} = [h^x_{uni,i}, h^x_{tri,i}]                   (12)

that is, f_mgran(H^x) = H^x_mgran.

Such a comprehensive hidden state can encode the semantics of multi-granular spans at a position, such as "out" and "come out of." Gating here implicitly enables cross-granular alignments in the subsequent attention mechanism, as it sets highway connections (Srivastava et al., 2015) between the input granularity and the output granularity.
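A compact sketch of the gated convolution f_gconv (Equations (7)-(9)) and the multi-granular states of Equations (10)-(12); the parameter shapes and names are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f_gconv(phrase, u, Wh, bh, Wg, bg):
    """Gated convolution, Equations (7)-(9): phrase is the concatenated input
    [..., u_i, ...]; u is the central unigram-level hidden state."""
    o = np.tanh(Wh @ phrase + bh)        # Equation (7): phrase-level candidate
    g = sigmoid(Wg @ phrase + bg)        # Equation (8): gate
    return g * u + (1.0 - g) * o         # Equation (9): trade-off between u_i and o_i

def f_mgran(H, params_uni, params_tri):
    """H: (d, n). Returns (2d, n): per position, the concatenation of the
    uni-granular (Equation 10) and tri-granular (Equation 11) gated states."""
    d, n = H.shape
    padded = np.concatenate([np.zeros((d, 1)), H, np.zeros((d, 1))], axis=1)
    cols = []
    for i in range(n):
        u = H[:, i]
        tri = np.concatenate([padded[:, i], padded[:, i + 1], padded[:, i + 2]])
        h_uni = f_gconv(u, u, *params_uni)     # width-1 parameters: Wh, Wg of shape (d, d)
        h_tri = f_gconv(tri, u, *params_tri)   # width-3 parameters: Wh, Wg of shape (d, 3d)
        cols.append(np.concatenate([h_uni, h_tri]))   # Equation (12)
    return np.stack(cols, axis=1)
```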

Attention Focus. For simplicity, we use the same architecture for the attention source (just introduced) and for the attention focus, ty (i.e., for the attention focus: f_mgran(H^y) = H^y_mgran; see Figure 4(b)). Thus, the focus of attention will participate in the matching process as well as be reweighted to form an attentive context vector. We leave exploring different architectures for attention source and focus to future work.

Another benefit of multi-granular hidden states in the attention focus is to keep structure information in the context vector. In standard attention mechanisms in RNNs, all hidden states are average-weighted as a context vector, and the order information is lost. By introducing hidden states of larger granularity into CNNs that keep the local order or structures, we boost the attentive effect.

Attention Beneficiary. In our system, we simply use f_gconv() over uni-granularity to learn a more abstract representation over the current hidden representations in H^x, so that

    f_bene(h^x_i) = f_gconv(h^x_i)                   (13)

Afterwards, the attentive context vector c^x_i is generated based on the attention source feature map f_mgran(H^x) and the attention focus feature map f_mgran(H^y), following the description of the light ATTCONV. Then attentive convolution is conducted over the attention beneficiary feature map f_bene(H^x) and the attentive context vectors C^x to get a higher-layer feature map for the sentence tx.
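Putting the modules together, the data flow of the advanced layer can be sketched as below, reusing the hypothetical helpers from the previous sketches (f_gconv, f_mgran, attentive_context, attentive_convolution); the weight-matrix dimensions must be chosen consistently with the concatenated feature maps.

```python
import numpy as np

def advanced_attconv(Hx, Hy, params_uni, params_tri, bene_params, W1, W2, b):
    source = f_mgran(Hx, params_uni, params_tri)       # attention source:  f_mgran(Hx)
    focus = f_mgran(Hy, params_uni, params_tri)        # attention focus:   f_mgran(Hy)
    bene = np.stack([f_gconv(Hx[:, i], Hx[:, i], *bene_params)   # beneficiary, Equation (13)
                     for i in range(Hx.shape[1])], axis=1)
    Cx = attentive_context(source, focus)              # light-layer attention, Equations (1)-(4)
    return attentive_convolution(bene, Cx, W1, W2, b)  # attentive convolution, Equation (6)
```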

3.3 Analysis

Compared with the standard attention mechanism in RNNs, ATTCONV has a similar matching function and a similar process of computing context vectors, but differs in three ways. (i) The discrimination of attention source, focus, and beneficiary improves expressivity. (ii) In CNNs, the surrounding hidden states for a concrete position are available, so the attention matching is able to encode the left context as well as the right context. In RNNs, however, we need bidirectional RNNs to yield both left and right context representations. (iii) As attentive convolution can be implemented by summing up two separate convolution steps (Equations 5 and 6), this architecture provides both attentive representations and representations computed without the use of attention. This strategy is helpful in practice for using richer representations in some NLP problems. In contrast, such a clean modular separation of representations computed with and without attention is harder to realize in attention-based RNNs.

Prior attention mechanisms explored in CNNs mostly involve attentive pooling (dos Santos et al., 2016; Yin et al., 2016); namely, the weights of the post-convolution pooling layer are determined by attention. These weights come from the matching process between hidden states of two text pieces. However, a weight value is not informative enough to tell the relationship between aligned terms. Consider a textual entailment sentence pair for which we need to determine whether "inside −→ outside" holds. The matching degree (take cosine similarity as an example) of these two words is high: for example, ≈ 0.7 in Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). On the other hand, the matching score between "inside" and "in" is lower: 0.31 in Word2Vec, 0.46 in GloVe. Apparently, the higher number 0.7 does not mean that "outside" is more likely than "in" to be entailed by "inside." Instead, joint representations for aligned phrases [h_inside, h_outside], [h_inside, h_in] are more informative and enable finer-grained reasoning than a mechanism that can only transmit information downstream by matching scores. We modify the conventional CNN filters so that "inside" can make the entailment decision by looking at the representation of the counterpart term ("outside" or "in") rather than a matching score.

A more damaging property of attentive pooling is the following. Even if matching scores could convey the phrase-level entailment degree to some extent, the matching weights are, in fact, not leveraged to make the entailment decision directly;


instead, they are used to weight the sum of the output hidden states of a convolution as the global sentence representation. In other words, fine-grained entailment degrees are likely to be lost in the summation of many vectors. This illustrates why attentive context vectors participating in the convolution operation are expected to be more effective than post-convolution attentive pooling (more explanations in §4.3, paragraph "Visualization").

Intra-context attention and extra-context attention. Figures 4(a) and 4(b) depict the modeling of a sentence tx with its context ty. This is a common application of attention mechanisms in the literature; we call it extra-context attention. But ATTCONV can also be applied to model a single text input, that is, intra-context attention. Consider a sentiment analysis example: "With the 2017 NBA All-Star game in the books I think we can all agree that this was definitely one to remember. Not because of the three-point shootout, the dunk contest, or the game itself but because of the ludicrous trade that occurred after the festivities." This example contains informative points at different locations ("remember" and "ludicrous"); conventional CNNs' ability to model such nonlocal dependency is limited because of fixed-size filter widths. In ATTCONV, we can set ty = tx. The attentive context vector then accumulates all related parts together for a given position. In other words, our intra-context attentive convolution is able to connect all related spans together to form a comprehensive decision. This is a new way to broaden the scope of conventional filter widths: a filter now covers not only the local window, but also those spans that are related but beyond the scope of the window.
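In code terms, the intra-context case is simply the light layer called with the sentence as its own context; the snippet below reuses the hypothetical helpers sketched in Section 3.1.

```python
# Intra-context ATTCONV: the sentence attends to itself, i.e., ty = tx.
# Hx, W1, W2, b are assumed to be set up as in the light-layer sketches above.
Cx = attentive_context(Hx, Hx)                   # nonlocal context gathered from tx itself
H_new = attentive_convolution(Hx, Cx, W1, W2, b)
```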

Comparison to Transformer.2 The "focus" in ATTCONV corresponds to "key" and "value" in the Transformer; that is, our versions of "key" and "value" are the same, coming from the context sentence. The "query" in the Transformer corresponds to the "source" and "beneficiary" of ATTCONV; namely, our model has two perspectives for utilizing the context: one acts as a real query (i.e., "source") to attend the context, the other (i.e., "beneficiary") takes the attentive context back to improve its own learned representation. If we reduce ATTCONV to unigram convolutional filters, it is pretty much a single Transformer layer (if we neglect the positional encoding in the Transformer and unify the "query-key-value" and "source-focus-beneficiary" mechanisms).

2 Our "source-focus-beneficiary" mechanism was inspired by Adel and Schütze (2017). Vaswani et al. (2017) later published the Transformer model, which has a similar "query-key-value" mechanism.

4 Experiments

We evaluate ATTCONV on sentence modeling in three scenarios: (i) zero-context, that is, intra-context; the same input sentence acts as tx as well as ty; (ii) single-context, that is, textual entailment—hypothesis modeling with a single premise as the extra-context; and (iii) multiple-context, namely, claim verification—claim modeling with multiple extra-contexts.

4.1 Common Set-up and Common Baselines

All experiments share a common set-up. The input is represented using 300-dimensional publicly available Word2Vec (Mikolov et al., 2013) embeddings; out-of-vocabulary words are randomly initialized. The architecture consists of the following four layers in sequence: embedding, attentive convolution, max-pooling, and logistic regression. The context-aware representation of tx is forwarded to the logistic regression layer. We use AdaGrad (Duchi et al., 2011) for training. Embeddings are fine-tuned during training. Hyperparameter values include: learning rate 0.01, hidden size 300, batch size 50, filter width 3.

All experiments are designed to explore comparisons in three aspects: (i) within ATTCONV, "light" vs. "advanced"; (ii) "attentive convolution" vs. "attentive pooling"/"attention only"; and (iii) "attentive convolution" vs. "attentive RNN".

To this end, we always report "light" and "advanced" ATTCONV performance and compare against five types of common baselines: (i) w/o context; (ii) w/o attention; (iii) w/o convolution: similar to the Transformer's principle (Vaswani et al., 2017), we discard the convolution operation in Equation (5) and forward the addition of the attentive context c^x_i and the hidden state h^x_i into a fully connected layer. To keep enough parameters, we stack four layers in total so that "w/o convolution" has the same number of parameters as light ATTCONV; (iv) with attention: RNNs with attention and CNNs with attentive pooling; and (v) prior state of the art, typeset in italics.


systems
Paragraph Vector
Lin et al. Bi-LSTM
Lin et al. CNN
MultichannelCNN (Kim)
CNN+internal attention
ABCNN
APCNN
Attentive-LSTM
Lin et al. RNN Self-Att.
light

w/o convolution

advanced

acc
58.43
61.99
62.05
64.62
61.43
61.36
61.98
63.11
64.21
66.75
61.34
67.36∗

oh
/
w

norte
oh
i
t
norte
mi
t
t
a

h
t
i

w

norte
oh
i
t
norte
mi
t
t
a

t
t
A

V
norte
oh
C

Table 3: System comparison of sentiment analysis on
Yelp. Significant improvements over state of the art are
marked with ∗ (test of equal proportions, pag < 0.05). 4.2 Sentence Modeling with Zero-context: Sentiment Analysis We evaluate sentiment analysis on a Yelp bench- mark released by Lin et al. (2017): review-star pairs in sizes 500K (train), 2,000 (dev), and 2,000 (test). Most text instances in this data set are long: 25%, 50%, 75% percentiles are 46, 81, and 125 words, respectively. The task is five-way classification: 1 to 5 stars. The measure is accuracy. We use this benchmark because the predominance of long texts lets us evaluate the system perfor- mance of encoding long-range context, and the system by Lin et al. is directly related to ATTCONV in intra-context scenario. Baselines. (i) w/o attention. Three baselines from Lin et al. (2017): Paragraph Vector (Le and Mikolov, 2014) (unsupervised sentence rep- resentation learning), BiLSTM, and CNN. We also reimplement MultichannelCNN (Kim, 2014), recognized as a simple but surprisingly strong sentence modeler. (ii) with attention. A vanilla “Attentive-LSTM” by Rocktäschel et al. (2016). “RNN Self-Attention” (Lin et al., 2017) is di- rectly comparable to ATTCONV: it also uses intra- context attention. “CNN+internal attention” (Adel and Schütze, 2017), an intra-context attention idea similar to, but less complicated than, Lin et al. (2017). ABCNN & APCNN – CNNs with atten- tive pooling. Results and Analysis. Table 3 shows that advanced-ATTCONV surpasses its “light” coun- terpart, and obtains significant improvement over the state of the art. Figure 5: ATTCONV vs. MultichannelCNN for lengths. groups of Yelp text with ascending text ATTCONV performs more robustly across different lengths of text. In addition, ATTCONV surpasses attentive pool- ing (ABCNN&APCNN) with a big margin (>5%)
and outperforms the representative attentive-LSTM (>4%).

Furthermore, it outperforms the two self-attentive models: CNN+internal attention (Adel and Schütze, 2017) and RNN Self-Attention (Lin et al., 2017), which are specifically designed for single-sentence modeling. Adel and Schütze (2017) generate an attention weight for each CNN hidden state by a linear transformation of the same hidden state, then compute a weighted average over all hidden states as the text representation. Lin et al. (2017) extend that idea by generating a group of attention weight vectors; RNN hidden states are then averaged by those diverse weight vectors, allowing the extraction of different aspects of the text into multiple vector representations. Both works are essentially weighted mean pooling, similar to the attentive pooling in Yin et al. (2016) and dos Santos et al. (2016).

Next, we compare ATTCONV with MultichannelCNN, the strongest baseline system ("w/o attention"), for different length ranges to check whether ATTCONV can really encode long-range context effectively. We sort the 2,000 test instances by length, then split them into 10 groups, each consisting of 200 instances. Figure 5 shows the performance of ATTCONV vs. MultichannelCNN.

We observe that ATTCONV consistently outperforms MultichannelCNN for all lengths. Moreover, the improvement over MultichannelCNN generally increases with length. This is evidence that ATTCONV more effectively models long text.


[Figure 5: accuracy of MultichannelCNN vs. ATTCONV over indices of sorted text groups.]

         #instances   #entail   #neutral
train    23,596       8,602     14,994
dev      1,304        657       647
test     2,126        842       1,284
total    27,026       10,101    16,925

Table 4: Statistics of the SCITAIL data set.

oh
/
w

norte
oh
i
t
norte
mi
t
t
a

h
t
i

w

norte
oh
i
t
norte
mi
t
t
a

acc
systems
60.4
Majority Class
65.1
w/o Context
69.5
Bi-LSTM
70.6
NGram model
Bi-CNN
74.4
Enhanced LSTM 70.6
Attentive-LSTM 71.5
72.3
Decomp-Att
77.3
DGEM
75.2
APCNN
75.8
ABCNN
78.1
75.1
79.2

ATTCONV-light

w/o convolution
ATTCONV-advanced

Table 5: ATTCONV vs. baselines on SCITAIL.

This is likely because of ATTCONV’s capability to
encode broader context in its filters.

4.3 Sentence Modeling with a Single Context:

Textual Entailment

Data Set. SCITAIL (Khot et al., 2018) is a textual
entailment benchmark designed specifically for a
real-world task: multi-choice question answering.
All hypotheses tx were obtained by rephrasing
(pregunta, correct answer) pairs into single sen-
tenencias, and premises ty are relevant Web sentences
retrieved by an information retrieval method. Entonces
the task is to determine whether a hypothesis is
true or not, given a premise as context. All (tx, ty)
pairs are annotated via crowdsourcing. Accuracy
is reported. Mesa 1 shows examples and Table 4
gives statistics.

By this construction, a substantial performance
improvement on SCITAIL is equivalent to a better
QA performance (Khot et al., 2018). The hypoth-
esis tx is the target sentence, and the premise ty
acts as its context.

Líneas de base. Apart from the common baselines
(mira la sección 4.1), we include systems covered
by Khot et al. (2018): (i) n-gram Overlap: Un
overlap baseline, considering lexical granularity

695

such as unigrams, one-skip bigrams, y uno-
skip trigrams. (ii) Decomposable Attention Model
(Decomp-Att) (Parikh et al., 2016): Explore atten-
tion mechanisms to decompose the task into sub-
(iii) Enhanced LSTM
tasks to solve in parallel.
(Chen et al., 2017b): Enhance LSTM by taking
into account syntax and semantics from parsing
information. (iv) DGEM (Khot et al., 2018): A
(iv) DGEM (Khot et al., 2018): A
decomposed graph entailment model, the current
state-of-the-art.

Table 5 presents results on SCITAIL. (i) Within
ATTCONV, “advanced” beats “light” by 1.1%;
(ii) “w/o convolution” and attentive pooling (es decir.,
ABCNN & APCNN) get lower performances by
3%–4%; (iii) More complicated attention mech-
anisms equipped into LSTM (p.ej., “attentive-
LSTM” and “enhanced-LSTM”) perform even
worse.

Error Analysis. To better understand ATTCONV on SCITAIL, we study some error cases, listed in Table 6.

Language conventions. Pair #1 uses sequen-
tial commas (es decir., in “the egg, larva, pupa, y
adult”) or a special symbol sequence (es decir., in “egg
−> larva −> pupa −> adult”) to form a set or
secuencia; pair #2 has “A (or B)” to express the
equivalence of A and B. This challenge is expected
to be handled by DNNs with specific training signals.
En #3, “be-
cause smaller amounts of water evaporate in the
cool morning” cannot be inferred from the premise
ty directly. The main challenge in #4 is to dis-
tinguish “weight” from “force,” which requires
background physical knowledge that is beyond the
presented text here and beyond the expressivity of
word embeddings.

Knowledge beyond the text ty.

Complex discourse relation. The premise in #5
has an “or” structure. En #6, the inserted phrase
“with about 16,000 species” makes the connection
between “nonvascular plants” and “the mosses,
liverworts, and hornworts” hard to detect. Ambos
instances require the model to decode the dis-
course relation.

ATTCONV on SNLI. Table 7 shows the com-
parison. We observe that: (i) classifying hypothe-
ses without looking at premises,
that is, “w/o
context” baseline, results in a large improvement
over the “majority baseline.” This verifies the
strong bias in the hypothesis construction of the
SNLI data set (Gururangan et al., 2018; Poliak
et al., 2018). (ii) ATTCONV (advanced) surpasses


#

1

2

3

4

5

6

(Premise ty, Hypothesis tx) Pair

(ty) These insects have 4 life stages, the egg, larva, pupa, and adult.
(tx) The sequence egg −> larva −> pupa −> adult shows the life cycle
of some insects.
(ty) . . . the notochord forms the backbone (or vertebral column).
(tx) Backbone is another name for the vertebral column.
(ty) Water lawns early in the morning . . . prevent evaporation.
(tx) Watering plants and grass in the early morning is a way to conserve water
because smaller amounts of water evaporate in the cool morning.
(ty) . . . the SI unit . . . for force is the Newton (norte) and is defined as (kg·m/s−2 ).
(tx) Newton (norte) is the SI unit for weight.
(ty) Heterotrophs get energy and carbon from living plants or animals
(consumers) or from dead organic matter (decomposers).
(tx) Mushrooms get their energy from decomposing dead organisms.
(ty) . . . are a diverse assemblage of three phyla of nonvascular plants, con
acerca de 16,000 species, that includes the mosses, liverworts, and hornworts.
(tx) Moss is best classified as a nonvascular plant.

G/P Challenge

1/0

1/0

idioma
conventions

idioma
conventions

1/0

beyond text

0/1

beyond text

0/1

1/0

discourse
relation

discourse
relation

Table 6: Error cases of ATTCONV in SCITAIL. ". . .": truncated text. "G/P": gold/predicted label.

oh
/
w

norte
oh
i
t
norte
mi
t
t
a

h
t
i

w

norte
oh
i
t
norte
mi
t
t
a

0

Systems

#para acc
34.3
majority class
w/o context (es decir., hypothesis only) 270k 68.7
220k 77.6
Bi-LSTM (Bowman et al., 2015)
270k 80.3
Bi-CNN
3.5METRO 82.1
Tree-CNN (Mou et al., 2016)
6.3METRO 84.8
NES (Munkhdalai and Yu, 2017)
250k 83.5
Attentive-LSTM (Rocktäschel)
95METRO 84.4
Self-Attentive (Lin et al., 2017)
1.9METRO 86.1
Match-LSTM (Wang and Jiang)
3.4METRO 86.3
LSTMN (Cheng et al., 2016)
580k 86.8
Decomp-Att (Parikh)
7.7METRO 88.6
Enhanced LSTM (Chen et al., 2017b)
ABCNN (Yin et al., 2016)
834k 83.7
APCNN (dos Santos et al., 2016) 360k 83.9
360k 86.3
360k 84.9
900k 87.8
8METRO 88.7

ATTCONV – light

w/o convolution
ATTCONV – advanced
State-of-the-art (Peters et al., 2018)

volution (Figure 6(a)) and attentive pooling
(Figure 6(b)).

(después

(i) ei,j

in sentence tx; (ii) hx

softmax), which shows

Cifra 6(a) explores the visualization of two
kinds of features learned by light ATTCONV in
SNLI data set (most are short sentences with
in Equa-
rich phrase-level reasoning):
ción (1)
el
attention distribution over context ty by the hidden
state hx
i,new in Equation (5)
i
for i = 1, 2, · · · , |tx|;
it shows the context-
aware word features in tx. By the two visual-
ized features, we can identify which parts of the
context ty are more important for a word in sen-
tence tx, and a max-pooling, over those context-
driven word representations, selects and forwards
dominant (palabra, leftcontext, rightcontext, attcontext)
combinations to the final decision maker.

Table 7: Performance comparison on SNLI test. En-
semble systems are not included.

all “w/o attention” baselines and “with attention”
CNN baselines (es decir., attentive pooling), obtaining
a performance (87.8%) that is close to the state of
the art (88.7%).

We also report

the parameter size in SNLI
as most baseline systems did. Mesa 7 muestra
in comparison to these baselines, nuestro
eso,
ATTCONV (light and advanced) has a more lim-
ited number of parameters, yet its performance is
competitive.

Visualization. In Figure 6, we visualize the
attention mechanisms explored in attentive con-

Cifra 6(a) shows the features3 of sentence tx
= “A dog jumping for a Frisbee in the snow” con-
ditioned on the context ty = “An animal is out-
side in the cold weather, playing with a plastic
toy.” Observations:
(i) The right figure shows
that the attention mechanism successfully aligns
some cross-sentence phrases that are informative
to the textual entailment problem, such as “dog”
to “animal” (es decir., cx
dog ≈ “animal”), “Frisbee”
to “plastic toy” and “playing” (es decir., cx
F risbee ≈
“plastic toy”+“playing”); (ii) The left figure shows
a max-pooling over the generated features of
filter_1 and filter_2 will focus on the context-
aware phrases (A, dog, jumping, cx
dog) y (a,

3Por simplicidad, we show 2 out of 300 ATTCONV filters.


(a) Visualization of features generated by ATTCONV's filters on sentences tx and ty. A max-pooling over filter_1 locates the phrase (A, dog, jumping, c^x_dog), and filter_2 locates the phrase (a, Frisbee, in, c^x_Frisbee). "c^x_dog" (resp. c^x_Fris.)—the attentive context of "dog" (resp. "Frisbee") in tx—mainly comes from "animal" (resp. "toy" and "playing") in ty.


(b) Attention visualization for attentive pooling (ABCNN). Based on the words in tx and ty, first, a convolution layer with
filter width 3 outputs hidden states for each sentence, then each hidden state will obtain an attention weight for how well this
hidden state matches towards all the hidden states in the other sentence, and finally all hidden states in each sentence will be
weighted and summed up as the sentence representation. This visualization shows that the spans “dog jumping for” and “in
the snow” in tx and the spans “animal is outside” and “in the cold” in ty are most indicative to the entailment reasoning.

Figure 6: Attention visualization for attentive convolution (top) and attentive pooling (bottom) between sentence tx = "A dog jumping for a Frisbee in the snow" (left) and sentence ty = "An animal is outside in the cold weather, playing with a plastic toy" (right).

Frisbee, in, c^x_Frisbee), respectively; the two phrases are crucial to the entailment reasoning for this (ty, tx) pair.

Figure 6(b) shows the phrase-level (i.e., each
consecutive trigram) attentions after the convolu-
tion operation. As Figure 3 shows, a subsequent
pooling step will weight and sum up those phrase-
level hidden states as an overall sentence represen-
tation. So, even though some phrases such as “in

the snow” in tx and “in the cold” in ty show im-
portance in this pair instance, the final sentence
representation still (i) lacks a fine-grained phrase-
to-phrase reasoning, y (ii) underestimates some
indicative phrases such as “A dog” in tx and “An
animal” in ty.

Briefly, attentive convolution first performs phrase-to-phrase, inter-sentence reasoning, then composes features; attentive pooling composes



         #SUPPORTED   #REFUTED   #NEI
train    80,035       29,775     35,639
dev      3,333        3,333      3,333
test     3,333        3,333      3,333

Table 8: Statistics of claims in the FEVER data set.

phrase features as sentence representations, then performs reasoning. Intuitively, attentive convolution better fits the way humans conduct entail-
ment reasoning, and our experiments validate its
superiority—it is the hidden states of the aligned
phrases rather than their matching scores that support
better representation learning and decision-making.

The comparisons in both SCITAIL and SNLI

show that:

• CNNs with attentive

convolution (es decir.,
ATTCONV) outperform the CNNs with at-
tentive pooling (es decir., ABCNN and APCNN);
• Some competitors got over-tuned on SNLI
while demonstrating mediocre performance
in SCITAIL—a real-world NLP task. Our sys-
tem ATTCONV shows its robustness in both
benchmark data sets.

4.4 Sentence Modeling with Multiple Contexts:

Claim Verification

Data Set. For this task, we use FEVER (Thorne
et al., 2018); it infers the truthfulness of claims by
extracted evidence. The claims in FEVER were
manually constructed from the introductory sec-
tions of about 50K popular Wikipedia articles in
the June 2017 dump. Claims have 9.4 tokens on
average. Table 8 lists the claim statistics.

In addition to claims, FEVER also provides a
Wikipedia corpus of approximately 5.4 million ar-
ticles, from which gold evidences are gathered and
provided. Figure 7 shows the distributions of sen-
tence sizes in FEVER’s ground truth evidence set
(es decir., the context size in our experimental set-up).
We can see that roughly 28% of evidence instances
cover more than one sentence and roughly 16%
cover more than two sentences.

Each claim is labeled as SUPPORTED, RE-
FUTED, or NOTENOUGHINFO (NEI) given the
gold evidence. The standard FEVER task also
explores the performance of evidence extraction,
evaluated by F1 between extracted evidence and
gold evidence. This work focuses on the claim en-
tailment part, assuming the evidences are provided
(extracted or gold). More specifically, we treat a
claim as tx, and its evidence sentences as context ty.

Figure 7: Distribution of #sentences in FEVER evidence.

This task has two evaluations:

(i) ALL—
accuracy of claim verification regardless of the
validness of evidence; (ii) SUBSET—verification
accuracy of a subset of claims, in which the gold
evidence for SUPPORTED and REFUTED claims
must be fully retrieved. We use the official eval-
uation toolkit.4

Set-ups.
(i) We adopt the same retrieved evi-
dence set (i.e., contexts ty) as Thorne et al. (2018):
top-5 most relevant sentences from top-5 retrieved
wiki pages by a document retriever (Chen et al.,
2017a). The quality of this evidence set against the
ground truth is: 44.22 (recall), 10.44 (precision), 16.89 (F1) on dev, and 45.89 (recall), 10.79 (precision), 17.47 (F1) on test. This set-up challenges
our system with potentially unrelated or even mis-
leading context. (ii) We use the ground truth evi-
dence as context. This lets us determine how far
our ATTCONV can go for this claim verification
problem once the accurate evidence is given.

Baselines. We first include the two systems ex-
plored by Thorne et al. (2018): (i) MLP: A multi-
layer perceptron baseline with a single hidden
capa, based on tf-idf cosine similarity between the
claim and the evidence (Riedel et al., 2017); (ii)
Decomp-Att (Parikh et al., 2016): A decompos-
able attention model that is tested in SCITAIL and
SNLI before. Note that both baselines first relied
on an information retrieval system to extract the
top-5 relevant sentences from the retrieved top-5
wiki pages as evidence for claims, then concate-
nated all evidence sentences as a longer context
for a claim.

4https://github.com/sheffieldnlp/fever-

scorer.



                                          retrieved evi.       gold evi.
      system                              ALL      SUB
dev   MLP                                 41.86    19.04        65.13
      Bi-CNN                              47.82    26.99        75.02
      APCNN                               50.75    30.24        78.91
      ABCNN                               51.39    32.44        77.13
      Attentive-LSTM                      52.47    33.19        78.44
      Decomp-Att                          52.09    32.57        80.82
      ATTCONV (light, context-wise)       57.78    34.29        83.20
        w/o convolution                   47.29    25.94        73.18
      ATTCONV (light, context-conc)       59.31    37.75        84.74
        w/o convolution                   48.02    26.67        73.44
      ATTCONV (advanced, context-wise)    60.20    37.94        84.99
      ATTCONV (advanced, context-conc)    62.26    39.44        86.02
test  (Thorne et al., 2018)               50.91    31.87        –
      ATTCONV                             61.03    38.77        84.61

Table 9: Performance on dev and test of FEVER. In the "gold evi." scenario, ALL and SUBSET are the same.

We then consider two variants of our ATTCONV in dealing with the modeling of tx with variable-size context ty. (i) Context-wise: we first use all evidence sentences one by one as context ty to guide the representation learning of the claim tx, generating a group of context-aware representation vectors for the claim; then we do element-wise max-pooling over this vector group as the final representation of the claim. (ii) Context-conc: concatenate all evidence sentences as a single piece of context, then model the claim based on this context. This is the same preprocessing step as in Thorne et al. (2018).
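The two variants can be sketched as follows, where attconv_encode is a hypothetical stand-in for the ATTCONV encoder that returns one fixed-size, context-aware claim vector per (claim, context) pair.

```python
import numpy as np

def context_wise(claim, contexts, attconv_encode):
    """One context-aware claim vector per evidence sentence, then element-wise max-pooling."""
    reps = np.stack([attconv_encode(claim, c) for c in contexts])
    return reps.max(axis=0)

def context_conc(claim, contexts, attconv_encode):
    """All evidence sentences concatenated into a single context, as in Thorne et al. (2018)."""
    return attconv_encode(claim, " ".join(contexts))
```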

Results. Table 9 compares our ATTCONV in different set-ups against the baselines. First, ATTCONV surpasses the top competitor "Decomp-Att," reported in Thorne et al. (2018), by big margins in dev (ALL: 62.26 vs. 52.09) and test (ALL: 61.03 vs. 50.91). In addition, "advanced-ATTCONV" consistently outperforms its "light" counterpart. Furthermore, ATTCONV surpasses attentive pooling (i.e., ABCNN & APCNN) and "attentive-LSTM" by >10% in ALL, >6% in SUB, and >8% in "gold evi."

Figure 8 further explores the fine-grained performance of ATTCONV for different sizes of gold evidence (i.e., different sizes of context ty). The system shows comparable performance for sizes 1 and 2. Even for context sizes larger than 5, it only drops by 5%.


Figure 8: Fine-grained ATTCONV performance given variable-size gold FEVER evidence as the claim's context.

These experiments on claim verification clearly
show the effectiveness of ATTCONV in sen-
tence modeling with variable-size context. This
should be attributed to the attention mechanism in
ATTCONV, which enables a word or a phrase in
the claim tx to “see” and accumulate all related
clues even if those clues are scattered across mul-
tiple contexts ty.

Error Analysis. We do error analysis for the "retrieved evidence" scenario.

Error case #1 is due to the failure of fully re-
trieving all evidence. Por ejemplo, a successful
support of the claim “Weekly Idol has a host born
in the year 1978” requires the information compo-
sition from three evidence sentences, two from the
wiki article “Weekly Idol,” and one from “Jeong
Hyeong-don.” However, only one of them is
retrieved in the top-5 candidates. Our system pre-
dicts REFUTED. This error is more common in
instances for which no evidence is retrieved.

Error case #2 is due to the insufficiency of rep-
resentation learning. Consider the wrong claim
"Corsica belongs to Italy" (i.e., in the REFUTED class). Even though good evidence is retrieved, the
system is misled by noise evidence: “It is located
. . . west of the Italian Peninsula, with the nearest
land mass being the Italian island . . . ".

Error case #3 is due to the lack of advanced data
preprocessing. For a human, it is very easy to “re-
fute” the claim “Telemundo is an English-language
television network” by the evidence “Telemundo
is an American Spanish-language terrestrial tele-
visión . . . " (from the “Telemundo” wikipage), por
checking the keyphrases: “Spanish-language” vs.
“English-language.” Unfortunately, both tokens
are unknown words in our system; as a result,


they do not have informative embeddings. A more
careful data preprocessing is expected to help.

5 Summary

We presented ATTCONV, the first work that enables CNNs to acquire the attention mechanism commonly used in RNNs. ATTCONV combines the strengths of CNNs with the strengths of the RNN attention mechanism. On the one hand, it makes broad and rich context available for prediction, whether that context comes from external inputs (extra-context) or internal inputs (intra-context). On the other hand, it takes full advantage of the strengths of convolution: it is more order-sensitive than attention in RNNs, and local-context information can be powerfully and efficiently modeled through convolution filters. Our experiments demonstrate the effectiveness and flexibility of ATTCONV when modeling sentences with variable-size context.

Acknowledgments

We gratefully acknowledge funding for this work by the European Research Council (ERC #740516). We would like to thank the anonymous reviewers for their helpful comments.

References

Heike Adel and Hinrich Schütze. 2017. Exploring different dimensions of attention for uncertainty detection. In Proceedings of EACL, pages 22–34, Valencia, Spain.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR, San Diego, USA.

Samuel R. Bowman, Gabor Angeli, Christopher
Potts, and Christopher D. Manning. 2015. A
large annotated corpus for learning natural lan-
guage inference. In Proceedings of EMNLP,
pages 632–642, Lisbon, Portugal.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading Wikipedia to answer open-domain questions. In Proceedings of ACL, pages 1870–1879, Vancouver, Canada.

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017b. Enhanced LSTM for natural language inference. In Proceedings of ACL, pages 1657–1668, Vancouver, Canada.

Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of EMNLP, pages 551–561, Austin, USA.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537.

Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies. Morgan & Claypool.

John Duchi, Elad Hazan, and Yoram Singer. 2011.
Adaptive subgradient methods for online learn-
ing and stochastic optimization. Journal of Ma-
chine Learning Research, 12:2121–2159.

Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211.

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML, pages 1243–1252, Sydney, Australia.

Alex Graves. 2013. Generating sequences with re-
current neural networks. CORR, abs/1308.0850.

Alex Graves, Greg Wayne, and Ivo Danihelka.

2014. Neural turing machines. CORR, abs/1410.5401.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of NAACL-HLT, pages 107–112, New Orleans, USA.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of NIPS, pages 1693–1701, Montréal, Canada.

Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL, pages 655–665, Baltimore, USA.

Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTaiL: A textual entailment dataset from science question answering. In Proceedings of AAAI, pages 5189–5197, New Orleans, USA.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746–1751, Doha, Qatar.

Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. In Proceedings of ICLR, Toulon, France.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of ICML, pages 1378–1387, New York, USA.

Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of ICML, pages 1188–1196, Beijing, China.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.

Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of ACL, pages 1106–1115, Beijing, China.

Jindrich Libovický and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of ACL, pages 196–202, Vancouver, Canada.

Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of ICLR, Toulon, France.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP, pages 1412–1421, Lisbon, Portugal.

Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of ICML, pages 1727–1736, New York, USA.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119, Lake Tahoe, USA.

Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of ACL, pages 130–136, Berlin, Germany.

Tsendsuren Munkhdalai and Hong Yu. 2017. Neural semantic encoders. In Proceedings of EACL, pages 397–407, Valencia, Spain.

Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Çaglar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of CoNLL, pages 280–290, Berlin, Germany.

Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of EMNLP, pages 2249–2255, Austin, USA.

Jeffrey Pennington, Richard Socher, and Christopher
D. Manning. 2014. GloVe: Global vectors for
word representation. In Proceedings of EMNLP,
pages 1532–1543, Doha, Qatar.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237, New Orleans, USA.

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural
language inference. In Proceedings of *SEM, pages 180–191, New Orleans, USA.

Benjamin Riedel, Isabelle Augenstein, Georgios P.
Spithourakis, and Sebastian Riedel. 2017. A
simple but tough-to-beat baseline for the fake
news challenge stance detection task. CORR,
abs/1707.03264.

Tim Rocktäschel, Edward Grefenstette, Karl
Moritz Hermann, Tomáš Kočiský, and Phil
Blunsom. 2016. Reasoning about entailment
with neural attention. In Proceedings of ICLR,
San Juan, Puerto Rico.

Cícero Nogueira dos Santos, Ming Tan, Bing
Xiang, and Bowen Zhou. 2016. Attentive pool-
ing networks. CORR, abs/1602.03609.

Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of ICLR, Toulon, France.

Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL, pages 1577–1586, Beijing, China.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Proceedings of NIPS, pages 2377–2385, Montréal, Canada.

James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: A large-scale dataset for fact extraction and verification. In Proceedings of NAACL-HLT, pages 809–819, New Orleans, USA.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 6000–6010, Long Beach, USA.

Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of NAACL-HLT, pages 1442–1451, San Diego, USA.

Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-LSTM and answer pointer. In Proceedings of ICLR, Toulon, France.

Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017a. Gated self-matching networks for reading comprehension and question answering. In Proceedings of ACL, pages 189–198, Vancouver, Canada.

Zhiguo Wang, Wael Hamza, and Radu Florian.
2017b. Bilateral multi-perspective matching for
natural language sentences. En procedimientos de
IJCAI, pages 4144–4150, Melbourne, Australia.

Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In Proceedings of ICML, pages 2397–2406, New York City, USA.

Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In Proceedings of ICLR, Toulon, France.

Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. TACL, 4:259–272.
