oLMpics: On What Language Model Pre-training Captures

Alon Talmor1,2 Yanai Elazar1,3 Yoav Goldberg1,3

Jonathan Berant1,2

1The Allen Institute for AI
2Tel-Aviv University
3Bar-Ilan University
{alontalmor@mail,joberant@cs}.tau.ac.il
{yanaiela,yoav.goldberg}@gmail.com

Abstract

Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data. To address this, we propose an evaluation protocol that includes both zero-shot evaluation (no fine-tuning), as well as comparing the learning curve of a fine-tuned LM to the learning curve of multiple controls, which paints a rich picture of the LM capabilities. Our main findings are that: (a) different LMs exhibit qualitatively different reasoning abilities, e.g., ROBERTA succeeds in reasoning tasks where BERT fails completely; (b) LMs do not reason in an abstract manner and are context-dependent, e.g., while ROBERTA can compare ages, it can do so only when the ages are in the typical range of human ages; (c) on half of our reasoning tasks all models fail completely. Our findings and infrastructure can help future work on designing new datasets, models, and objective functions for pre-training.

1 Introduction

Large pre-trained language models (LMs) have revolutionized the field of natural language processing in the last few years (Dai and Le, 2015; Peters et al., 2018a; Yang et al., 2019; Radford et al., 2019; Devlin et al., 2019). This has instigated research exploring what is captured by the contextualized representations that these LMs compute, revealing that they encode substantial amounts of syntax and semantics (Linzen et al., 2016b; Tenney et al., 2019b,a; Shwartz and Dagan, 2019; Lin et al., 2019; Coenen et al., 2019).

Despite these efforts, it remains unclear what symbolic reasoning capabilities are difficult to learn from an LM objective only. In this paper, we propose a diverse set of probing tasks for types of symbolic reasoning that are potentially difficult to capture using an LM objective (see Table 1). Our intuition is that because an LM objective focuses on word co-occurrence, it will struggle with tasks that are considered to involve symbolic reasoning, such as determining whether a conjunction of properties holds for an object, or comparing the sizes of different objects. Understanding what is missing from current LMs may help design datasets and objectives that will endow models with the missing capabilities.

However, how does one verify whether pre-trained representations hold information that is useful for a particular task? Past work mostly resorted to fixing the representations and fine-tuning a simple, often linear, randomly initialized probe, to determine whether the representations hold relevant information (Ettinger et al., 2016; Adi et al., 2016; Belinkov and Glass, 2019; Hewitt and Manning, 2019; Wallace et al., 2019; Rozen et al., 2019; Peters et al., 2018b; Warstadt et al., 2019). However, it is difficult to determine whether success is due to the pre-trained representations or due to fine-tuning itself (Hewitt and Liang, 2019). To handle this challenge, we include multiple controls that improve our understanding of the results.

Our ''purest'' setup is zero-shot: we cast tasks in the masked LM format, and use a pre-trained LM without any fine-tuning. For example, given



the statement ''A cat is [MASK] than a mouse'', an LM can decide whether the probability of ''larger'' is higher than that of ''smaller'' for the masked word (Figure 1). If a model succeeds without fine-tuning over many pairs of objects, then its representations are useful for this task. However, if it fails, it could be due to a mismatch between the language it was pre-trained on and the language of the probing task (which might be automatically generated, containing grammatical errors). Thus, we also compute the learning curve (Figure 1), by fine-tuning, with increasing amounts of data, the already pre-trained masked language modeling (MLM) output ''head'', a 1-hidden layer multilayer perceptron (MLP) on top of the model's contextualized representations. A model that adapts from fewer examples arguably has better representations for the task.

Moreover, to diagnose whether model performance is related to pre-training or fine-tuning, we add controls to every experiment (Figures 1, 2). First, we add a control that makes minimal use of language tokens, that is, ''cat [MASK] mouse'' (NO LANG. in Figure 1). If a model succeeds given minimal use of language, the performance can be mostly attributed to fine-tuning rather than to the pre-trained language representations. Similar logic is used to compare against baselines that are not pre-trained (except for non-contextualized word embeddings). Overall, our setup provides a rich picture of whether LM representations help in solving a wide range of tasks.

We introduce eight tasks that test different types of reasoning, as shown in Table 1.¹ We run experiments using several pre-trained LMs, based on BERT (Devlin et al., 2019) and ROBERTA (Liu et al., 2019). We find that there are clear qualitative differences between different LMs with similar architecture. For example, ROBERTA-LARGE (ROBERTA-L) can perfectly solve some reasoning tasks, such as comparing numbers, even in a zero-shot setup, whereas other models' performance is close to random. However, good performance is highly context-dependent. Specifically, we repeatedly observe that even when a model solves a task, small changes to the input quickly derail it to low performance. For example, ROBERTA-L can almost perfectly compare people's ages when the numeric values are in the expected range (15–105), but miserably fails if the values are outside this range. Interestingly, it is able to reliably answer when ages are specified through the birth year in the range 1920–2000. This highlights that the LMs' ability to solve this task is strongly tied to the specific values and linguistic context and does not generalize to arbitrary scenarios. Last, we find that in four out of eight tasks, all LMs perform poorly compared with the controls.

Our contributions are summarized as follows:

• A set of probes that test whether specific reasoning skills are captured by pre-trained LMs.

• An evaluation protocol for understanding whether a capability is encoded in pre-trained representations or is learned during fine-tuning.

• An analysis of skills that current LMs possess. We find that LMs with similar architectures are qualitatively different, that their success is context-dependent, and that often all LMs fail.

• Code and infrastructure for designing and testing new probes on a large set of pre-trained LMs. The code and models are available at http://github.com/alontalmor/oLMpics.

¹Average human accuracy was evaluated by two of the authors. Overall, inter-annotator agreement accuracy was 92%.

Figure 1: Overview of our experimental design. Two probes are evaluated using learning curves (including zero-shot). ROBERTA-L's (red squares, upper text in black) accuracy is compared with a NO LANGUAGE (NO LANG.) control (red circles, lower text in black), and MLM-BASELINE, which is not pre-trained (green triangles). Here, we conclude that the LM representations are well-suited for task A, whereas in task B the model is adapting to the task during fine-tuning.

Probe name            | Setup  | Example                                                                                           | Human¹
ALWAYS-NEVER          | MC-MLM | A chicken [MASK] has horns. A. never B. rarely C. sometimes D. often E. always                    | 91%
AGE COMPARISON        | MC-MLM | A 21 year old person is [MASK] than me in age, If I am a 35 year old person. A. younger B. older  | 100%
OBJECTS COMPARISON    | MC-MLM | The size of a airplane is [MASK] than the size of a house. A. larger B. smaller                   | 100%
ANTONYM NEGATION      | MC-MLM | It was [MASK] hot, it was really cold. A. not B. really                                           | 90%
PROPERTY CONJUNCTION  | MC-QA  | What is usually located at hand and used for writing? A. pen B. spoon C. computer                 | 92%
TAXONOMY CONJUNCTION  | MC-MLM | A ferry and a floatplane are both a type of [MASK]. A. vehicle B. airplane C. boat                | 85%
ENCYC. COMPOSITION    | MC-QA  | When did the band where Junior Cony played first form? A. 1978 B. 1977 C. 1980                    | 85%
MULTI-HOP COMPOSITION | MC-MLM | When comparing a 23, a 38 and a 31 year old, the [MASK] is oldest. A. second B. first C. third    | 100%

Table 1: Examples for our reasoning probes. We use two types of experimental setups, explained in §2. A. is the correct answer.

2 Models

We now turn to the architectures and loss functions
used throughout the different probing tasks.

2.1 Pre-trained Language Models

All models in this paper take a sequence of tokens x = (x_1, . . . , x_n), and compute contextualized representations with a pre-trained LM, that is, h = ENCODE(x) = (h_1, . . . , h_n). Specifically, we consider: (a) BERT (Devlin et al., 2019), a pre-trained LM built using the Transformer (Vaswani et al., 2017) architecture, which consists of a stack of Transformer layers, where each layer includes a multi-head attention sub-layer and a feed-forward sub-layer. BERT is trained on large corpora using the MLM objective, that is, the model is trained to predict words that are masked from the input. This includes BERT-WHOLE-WORD-MASKING (BERT-WWM), which was trained using whole-word masking. (b) ROBERTA (Liu et al., 2019), which has the same architecture as BERT, but was trained on 10x more data and optimized carefully.
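As a concrete illustration of h = ENCODE(x), the following minimal sketch (ours, not the released oLMpics code) computes contextualized representations with a pre-trained LM, assuming the HuggingFace transformers library; the model name is only illustrative.

import torch
from transformers import AutoTokenizer, AutoModel

# Load a pre-trained encoder; "roberta-large" is an illustrative choice.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModel.from_pretrained("roberta-large")
model.eval()

# h = ENCODE(x): one contextualized vector per input token.
enc = tokenizer("A cat is larger than a mouse.", return_tensors="pt")
with torch.no_grad():
    h = model(**enc).last_hidden_state   # shape: (1, n, hidden_size)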

2.2 Probing Setups

We probe the pre-trained LMs using two setups:
multichoice MLM (MC-MLM) and multichoice
question answering (MC-QA). The default setup
is MC-MLM, used for tasks where the answer set
is small, consistent across the different questions,
and each answer appears as a single item in the
word-piece vocabulary.2 The MC-QA setup is
used when the answer set substantially varies
between questions, and many of the answers have
more than one word piece.

2Vocabularies of LMs such as BERT and ROBERTA
contain word-pieces, which are sub-word units that are
frequent in the training corpus. For details see Sennrich
et al. (2016).

MC-MLM Here, we convert the MLM setup to a multichoice setup (MC-MLM). Specifically, the input to the LM is the sequence x = ([CLS], . . . , x_{i-1}, [MASK], x_{i+1}, . . . , [SEP]), where a single token x_i is masked. Then, the contextualized representation h_i is passed through an MC-MLM head, where V is the vocabulary and FF_MLM is a 1-hidden layer MLP:

l = FF_MLM(h_i) ∈ R^{|V|},   p = softmax(m ⊕ l),

where ⊕ is element-wise addition and m ∈ {0, −∞}^{|V|} is a mask that guarantees that the support of the probability distribution will be over exactly K ∈ {2, 3, 4, 5} candidate tokens: the correct one and K − 1 distractors. Training minimizes the cross-entropy loss given the gold masked token. An input, e.g., ''[CLS] Cats [MASK] drink coffee [SEP]'', is passed through the model, the contextualized representation of the masked token is passed through the MC-MLM head, and the final distribution is over the vocabulary words ''always'', ''sometimes'', and ''never'', where the gold token is ''never'' in this case.

A compelling advantage of this setup is that reasonable performance can be obtained without training, using the original LM representations and the already pre-trained MLM head weights (Petroni et al., 2019).
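A minimal sketch of the MC-MLM scoring described above (our illustration, assuming the HuggingFace transformers API, not the authors' implementation): the vocabulary logits l at the masked position are combined with the mask m so that the softmax support covers only the K candidate tokens.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.eval()

statement = f"Cats {tokenizer.mask_token} drink coffee."
candidates = [" always", " sometimes", " never"]   # K single-piece answers

enc = tokenizer(statement, return_tensors="pt")
mask_pos = (enc.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**enc).logits[0, mask_pos]      # l in R^{|V|}

cand_ids = [tokenizer.encode(c, add_special_tokens=False)[0] for c in candidates]
m = torch.full_like(logits, float("-inf"))
m[cand_ids] = 0.0                                  # m in {0, -inf}^{|V|}
p = torch.softmax(m + logits, dim=-1)              # support only on the K candidates
print({c: p[i].item() for c, i in zip(candidates, cand_ids)})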

MC-QA Constructing an MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this, in two tasks we use the standard setup for answering multichoice questions with pre-trained LMs (Talmor et al., 2019; Mihaylov et al., 2018). Given a question q and candidate answers a_1, . . . , a_K, we compute for each candidate answer a_k representations h^(k) from the input tokens ''[CLS] q [SEP] a_k [SEP]''. Then the probability over answers is obtained using the multichoice QA head:

l^(k) = FF_QA(h^(k)_1),   p = softmax(l^(1), . . . , l^(K)),

where FF_QA is a 1-hidden layer MLP that is run over the [CLS] (first) token of an answer candidate and outputs a single logit. Note that in this setup the parameters of FF_QA cannot be initialized using the original pre-trained LM.
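The following is a small sketch (ours) of such an MC-QA head; the hidden size and the tanh activation are assumptions, since the text only specifies a 1-hidden layer MLP over the [CLS] representation of each candidate.

import torch
import torch.nn as nn

class MCQAHead(nn.Module):
    def __init__(self, hidden_size=1024):
        super().__init__()
        self.ff_qa = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),                    # activation choice is our assumption
            nn.Linear(hidden_size, 1),
        )

    def forward(self, cls_vectors):       # (K, hidden_size), one row per answer
        logits = self.ff_qa(cls_vectors).squeeze(-1)    # (K,)
        return torch.softmax(logits, dim=-1)

head = MCQAHead()
p = head(torch.randn(3, 1024))            # e.g., K = 3 candidate answers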

2.3 Baseline Models

To provide a lower bound on the performance of pre-trained LMs, we introduce two baseline models with only non-contextualized representations.

MLM-BASELINE This serves as a lower bound for the MC-MLM setup. The input to FF_MLM(·) is the hidden representation h ∈ R^1024 (for large models). To obtain a similar architecture with non-contextualized representations, we concatenate the first 20 tokens of each example, representing each token with a 50-dimensional GLOVE vector (Pennington et al., 2014), and pass this 1000-dimensional representation of the input through FF_MLM, exactly as in MC-MLM. In all probes, phrases are limited to 20 tokens. If there are fewer than 20 tokens in the input, we zero-pad the input.
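A minimal sketch (ours) of the MLM-BASELINE input construction under these assumptions: 50-dimensional GloVe vectors, the first 20 tokens, zero-padding, and concatenation into a 1000-dimensional vector that replaces h.

import numpy as np

def baseline_features(tokens, glove, dim=50, max_len=20):
    # One 50-d GloVe vector per token (zero vector for padding / unknown words).
    vecs = [glove.get(t.lower(), np.zeros(dim)) for t in tokens[:max_len]]
    vecs += [np.zeros(dim)] * (max_len - len(vecs))     # zero-pad to 20 tokens
    return np.concatenate(vecs)                          # shape: (1000,)

# `glove` is assumed to be a dict mapping words to 50-d numpy vectors,
# e.g., loaded from glove.6B.50d.txt; the entries below are placeholders.
glove = {"cat": np.random.randn(50), "mouse": np.random.randn(50)}
x = baseline_features("A cat is [MASK] than a mouse .".split(), glove)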

MC-QA Baseline This serves as a lower bound for MC-QA. We use the ESIM architecture over GLOVE representations, which is known to provide a strong model when the input is a pair of text fragments (Chen et al., 2017). We adapt the architecture to the multichoice setup using the procedure proposed by Zellers et al. (2018). Each phrase and candidate answer are passed as a sequence of tokens ''[CLS] phrase [SEP] answer [SEP]'' to the model. The contextualized representation of the [CLS] token is linearly projected to a single logit. The logits for the candidate answers are passed through a softmax layer to obtain probabilities, and the argmax is selected as the model prediction.

3 Controlled Experiments

We now describe the experimental design and controls used to interpret the results. We use the AGE-COMPARE task as a running example, where models need to compare the numeric value of ages.

3.1 Zero-shot Experiments with MC-MLM

Fine-tuning pre-trained LMs makes it hard to disentangle what is captured by the original representations and what was learned during fine-tuning. Thus, ideally, one should test LMs using the pre-trained weights without fine-tuning (Linzen et al., 2016a; Goldberg, 2019). The MC-MLM setup, which uses a pre-trained MLM head, achieves exactly that. One only needs to design the task as a statement with a single masked token and K possible output tokens. For example, in AGE-COMPARE, we chose the phrasing ''A AGE-1 year old person is [MASK] than me in age, If I am a AGE-2 year old person.'', where AGE-1 and AGE-2 are replaced with different integers, and the possible answers are ''younger'' and ''older''. Otherwise, no training is needed, and the original representations are tested.

Figure 2A provides an example of such zero-shot evaluation. Different values are assigned to AGE-1 and AGE-2, and the pixel is colored when the model predicts ''younger''. Accuracy (acc.) is measured as the proportion of cases when the model output is correct. The performance of BERT-WWM is on the left (blue), and of ROBERTA-L on the right (green). The results in Figure 2A and Table 2 show that ROBERTA-L compares numbers correctly (98% acc.), BERT-WWM achieves higher than random acc. (70% acc.), while BERT-L is random (50% acc.). The performance of MLM-BASELINE is also random, as the MLP_MLM weights are randomly initialized.

We note that picking the statement for each task was done through manual experimentation. We tried multiple phrasings (Jiang et al., 2019) and chose the one that achieves the highest average zero-shot accuracy across all tested LMs. Thus, if a model performs well, one can infer that it has the tested reasoning skill. However, failure does not entail that the reasoning skill is missing, as it is possible that there is a problem with the lexical-syntactic construction we picked.
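For concreteness, the sketch below (ours, assuming a HuggingFace-style API) runs the zero-shot AGE-COMPARE protocol: fill the template with two ages, read the logits at the masked position, and score the prediction ''younger'' vs. ''older'' against the gold relation.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-large")
lm = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()
younger, older = (tok.encode(" " + w, add_special_tokens=False)[0]
                  for w in ("younger", "older"))

correct = total = 0
for age1 in range(15, 39):
    for age2 in range(15, 39):
        if age1 == age2:
            continue
        text = (f"A {age1} year old person is {tok.mask_token} than me in age, "
                f"If I am a {age2} year old person.")
        enc = tok(text, return_tensors="pt")
        pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            logits = lm(**enc).logits[0, pos]
        pred = "younger" if logits[younger] > logits[older] else "older"
        gold = "younger" if age1 < age2 else "older"
        correct += (pred == gold)
        total += 1
print("zero-shot acc.:", correct / total)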

3.2 Learning Curves

Despite the advantages of zero-shot evaluation, the performance of a model might be adversely affected by mismatches between the language the pre-trained LM was trained on and the language of the examples in our tasks (Jiang et al., 2019). To tackle this, we fine-tune models with a small number of examples. We assume that if the LM representations are useful for a task, the model will require few examples to overcome the language mismatch and achieve high performance.
746

yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

mi
d
tu

/
t

a
C
yo
/

yo

a
r
t
i
C
mi

pag
d

F
/

d
oh

i
/

.

1
0
1
1
6
2

/
t

yo

a
C
_
a
_
0
0
3
4
2
1
9
2
3
7
1
6

/

/
t

yo

a
C
_
a
_
0
0
3
4
2
pag
d

.

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
8
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

In most cases, we train with N ∈ {62, 125, 250, 500, 1K, 2K, 4K} examples. To account for optimization instabilities, we fine-tune several times with different seeds, and report average accuracy across seeds. The representations h are fixed during fine-tuning, and we only fine-tune the parameters of MLP_MLM.

Evaluation and Learning-curve Metrics Learning curves are informative, but inspecting many learning curves can be difficult. Thus, we summarize them using two aggregate statistics. We report: (a) MAX, that is, the maximal accuracy on the learning curve, used to estimate how well the model can handle the task given the limited amount of examples; (b) the metric WS, which is a weighted average of accuracies across the learning curve, where higher weights are given to points where N is small.³ WS is related to the area under the accuracy curve, and to the online code metric proposed by Yogatama et al. (2019) and Blier and Ollivier (2018). The linearly decreasing weights emphasize our focus on performance given little training data, as it highlights what was encoded by the model before fine-tuning.

³We use the decreasing weights W = (0.23, 0.2, 0.17, 0.14, 0.11, 0.08, 0.07).
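A small sketch (ours) of the two aggregates, using the weights from footnote 3; the mapping of weights to training-set sizes is spelled out explicitly.

WEIGHTS = [0.23, 0.20, 0.17, 0.14, 0.11, 0.08, 0.07]   # for N = 62 ... 4K
SIZES = [62, 125, 250, 500, 1000, 2000, 4000]

def ws_and_max(accuracies):
    """`accuracies` holds one accuracy per training-set size, in SIZES order."""
    assert len(accuracies) == len(WEIGHTS)
    ws = sum(w * a for w, a in zip(WEIGHTS, accuracies))
    return ws, max(accuracies)

# Example: a hypothetical AGE-COMPARE learning curve.
ws, mx = ws_and_max([0.98, 0.98, 0.99, 0.99, 1.00, 1.00, 1.00])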

For AGE-COMPARE, the solid lines in Figure 2B illustrate the learning curves of ROBERTA-L and BERT-WWM, and Table 2 shows the aggregate statistics. We fine-tune the model by replacing AGE-1 and AGE-2 with values between 43 and 120, but test with values between 15 and 38, to guarantee that the model generalizes to values unseen at training time. Again, we see that the representations learned by ROBERTA-L are already equipped with the knowledge necessary for solving this task.

Model     | Zero-shot | MLP_MLM (WS / MAX) | LINEAR (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 98        | 98 / 100           | 97 / 100          | 31 / 51
BERT-WWM  | 70        | 82 / 100           | 69 / 85           | 13 / 15
BERT-L    | 50        | 52 / 57            | 50 / 51           | 1 / 0
RoBERTa-B | 68        | 75 / 91            | 69 / 84           | 24 / 25
BERT-B    | 49        | 49 / 50            | 50 / 50           | 0 / 0
Baseline  | 49        | 58 / 79            | - / -             | 0 / 0

Table 2: AGE-COMPARE results. Accuracy over two answer candidates (random is 50%). LANGSENSE are the Language Sensitivity controls, pert is PERTURBED LANG. and nolang is NO LANG. The baseline row is MLM-BASELINE.

Figure 2: An illustration of our evaluation protocol. We compare ROBERTA-L (green) and BERT-WWM (blue); controls are in dashed lines and markers are described in the legends. Zero-shot evaluation is on the top left; AGE-1 is ''younger'' (in color) vs. ''older'' (in white) than AGE-2.

3.3 Controls

Comparing learning curves tells us which model learns from fewer examples. However, because highly parameterized MLPs, as used in LMs, can approximate a wide range of functions, it is difficult to determine whether performance is tied to the knowledge acquired at pre-training time, or to the process of fine-tuning itself. We present controls that attempt to disentangle these two factors.

Are LMs sensitive to the language input? We are interested in whether pre-trained representations reason over language examples. Thus, a natural control is to present the reasoning task without language and inspect performance. If the learning curve of a model does not change when the input is perturbed or even mostly deleted, then the model shows low language sensitivity and the pre-trained representations do not explain the probe performance. This approach is related to work by Hewitt and Liang (2019), who proposed a control task, where the learning curve of a model is compared to a learning curve when words are



associated with random behavior. We propose
two control tasks:
NO LANGUAGE control We remove all input tokens, except for [MASK] and the arguments of the task, namely, the tokens that are necessary for computing the output. In AGE-COMPARE, an example is reduced to the phrase ''24 [MASK] 55'', where the candidate answers are the words ''blah'', for ''older'', and ''ya'', for ''younger''. If the learning curve is similar to when the full example is given (low language sensitivity), then the LM is not strongly using the language input.

The dashed lines in Figure 2B illustrate the learning curves in NO LANG.: ROBERTA-L (green) shows high language sensitivity, while BERT-WWM (blue) has lower language sensitivity. This suggests it handles this task partially during fine-tuning. Table 2 paints a similar picture, where the metric we use is identical to WS, except that instead of averaging accuracies, we average the difference in accuracies between the standard model and NO LANG. (rounding negative numbers to zero). For ROBERTA-L the value is 51, because ROBERTA-L gets almost 100% acc. in the presence of language, and is random (50% acc.) without language.

PERTURBED LANGUAGE control A more targeted language control is to replace words that are central for the reasoning task with nonsense words. Specifically, we pick key words in each probe template, and replace these words by randomly sampling from a list of 10 words that carry relatively limited meaning.⁴ For example, in PROPERTY CONJUNCTION, we can replace the word ''and'' with the word ''blah'' to get the example ''What is located at hand blah used for writing?''. If the learning curve of PERTURBED LANG. is similar to that of the original example, then the model does not utilize the pre-trained representation of ''and'' to solve the task, and may not capture its effect on the semantics of the statement.
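The sketch below (ours) generates the two controls for an AGE-COMPARE example; the nonsense substitution list follows footnote 4, and the choice of ''blah''/''ya'' as NO LANG. answer words mirrors the example above.

import random

NONSENSE = ["blah", "ya", "foo", "snap", "woo", "boo", "da", "wee", "foe", "fee"]

def no_lang_example(age1, age2):
    # Keep only the arguments and the mask; answers become nonsense stand-ins.
    statement = f"{age1} [MASK] {age2}"
    answers = {"older": "blah", "younger": "ya"}
    return statement, answers

def perturbed_lang_example(age1, age2, targeted=("age", "than")):
    # Replace the targeted words of the template with random nonsense words.
    statement = (f"A {age1} year old person is [MASK] than me in age, "
                 f"If I am a {age2} year old person.")
    for word in targeted:
        statement = statement.replace(f" {word}", f" {random.choice(NONSENSE)}")
    return statement

print(no_lang_example(24, 55))
print(perturbed_lang_example(24, 55))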

Targeted words change from probe to probe. For example, in AGE-COMPARE, the targeted words are ''age'' and ''than'', resulting in examples like ''A AGE-1 year old person is [MASK] blah me in da, If i am a AGE-2 year old person.'' Figure 2C shows the learning curves for ROBERTA-L and BERT-WWM, where solid lines correspond to the original examples and dashed lines are the PERTURBED LANG. control. Despite this minor perturbation, the performance of ROBERTA-L substantially decreases, implying that the model needs the input. In contrast, BERT-WWM's performance decreases only moderately.

⁴The list of substitutions is: ''blah'', ''ya'', ''foo'', ''snap'', ''woo'', ''boo'', ''da'', ''wee'', ''foe'' and ''fee''.

Does a linear transformation suffice? In MC-MLM, the representations h are fixed, and only the pre-trained parameters of MLP_MLM are fine-tuned. As a proxy for measuring ''how far'' the representations are from solving a task, we fix the weights of the first layer of MLP_MLM, and only train the final layer. Succeeding in this setup means that only a linear transformation of h is required. Table 2 shows the performance of this setup (LINEAR), compared with MLP_MLM.

Why is MC-MLM preferred over MC-QA? Figure 2D compares the learning curves of MC-MLM and MC-QA in AGE-COMPARE. Because in MC-QA the network MLP_QA cannot be initialized with pre-trained weights, zero-shot evaluation is not meaningful, and more training examples are needed to train MLP_QA. Still, the trends observed in MC-MLM remain, with ROBERTA-L achieving the best performance with the fewest examples.

4 The oLMpic Games

We now move to describing the research questions and the various probes used to answer these questions. For each task we describe how it was constructed, show results in a table as described in the controls section, and present an analysis.

Our probes are mostly targeted towards symbolic reasoning skills (Table 1). We examine the ability of language models to compare numbers, to understand whether an object has a conjunction of properties, and to perform multi-hop composition of facts, among others. However, since we generate examples automatically from existing resources, some probes also require background knowledge, such as the sizes of objects. Moreover, as explained in §3.1, we test models on a manually picked phrasing that might interact with the language abilities of the model. Thus, when a model succeeds this is evidence that it has the necessary skill, but failure could also be attributed to issues with background knowledge and linguistic abilities. In each probe, we will explicitly mention what knowledge and language abilities are necessary.


4.1 Can LMs perform robust comparison?

Comparing two numeric values requires representing the values and performing the comparison operation. In §3 we saw the AGE-COMPARE task, in which the ages of two people were compared. We found that ROBERTA-L and, to some extent, BERT-WWM were able to handle this task, performing well under the controls. We expand on this with related comparison tasks and perturbations that assess the sensitivity of LMs to the particular context and to the numerical values.

Is ROBERTA-L comparing numbers or ages? ROBERTA-L obtained zero-shot acc. of 98% on AGE-COMPARE. But is it robust? We test this using perturbations to the task and present the results in Figure 3. Figure 3A corresponds to the experiment from §3, where we observed that ROBERTA-L predicts ''younger'' (blue pixels) and ''older'' (white pixels) almost perfectly.

To test whether ROBERTA-L can compare ages given the birth year rather than the age, we use the statement ''A person born in YEAR-1 is [MASK] than me in age, If i was born in YEAR-2.'' Figure 3B shows that it correctly flips ''younger'' to ''older'' (76% acc.), reasoning that a person born in 1980 is older than one born in 2000.

However, when evaluated on the exact same statement, but with values corresponding to typical ages instead of years (Figure 3D), ROBERTA-L obtains an acc. of 12%, consistently outputting the opposite prediction. With ages as values and not years, it seems to disregard the language, performing the comparison based on the values alone. We will revisit this tendency in §4.4.

Symmetrically, Figure 3C shows results when
numeric values of ages are swapped with typical
years of birth. ROBERTA-L is unable to handle this,
always predicting ‘‘older’’.5 This emphasizes that
the model is sensitive to the argument values.

Can Language Models compare object sizes? Comparing physical properties of objects requires knowledge of the numeric value of the property and the ability to perform comparison. Prior work has shown that such knowledge can be extracted from text and images (Bagherinezhad et al., 2016; Forbes and Choi, 2017; Yang et al., 2018a; Elazar et al., 2019; Pezzelle and Fernández, 2019). Can LMs do the same?

5We observed that in neutral contexts models have a slight preference for ''older'' over ''younger'', which could potentially explain this result.


Figure 3: AGE COMPARISON perturbations. Left-side graphs are age comparison, right-side graphs are age comparison by birth-year. In the bottom row, the values of ages are swapped with birth-years and vice versa. In blue pixels the model predicts ''older'', in white ''younger''. (A) is the correct answer.

Probe Construction We construct statements of the form ''The size of a OBJ-1 is usually much [MASK] than the size of a OBJ-2.'', where the candidate answers are ''larger'' and ''smaller''. To instantiate the two objects, we manually sample from a list of objects from two domains: animals (e.g., ''camel'') and general objects (e.g., ''sun''), and use the first domain for training and the second for evaluation. We bucket different objects based on the numerical value of their size, using their median value in DOQ (Elazar et al., 2019), and then manually fix any errors. This probe requires prior knowledge of object sizes and understanding of a comparative language construction. Overall, we collected 127 and 35 objects for training and development, respectively. We automatically instantiate object slots using objects that are in the same bucket.
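As a rough sketch of this construction (ours; the bucket contents are invented and, as a simplification, the two object slots are always filled from different buckets so that the gold answer follows from the bucket ordering):

import itertools

BUCKETS = [            # ordered from small to large; illustrative contents only
    ["nail", "pen"],
    ["laptop", "table"],
    ["house", "airplane"],
]
TEMPLATE = "The size of a {obj1} is usually much [MASK] than the size of a {obj2}."

def make_examples():
    examples = []
    for (i, bucket1), (j, bucket2) in itertools.product(enumerate(BUCKETS), repeat=2):
        if i == j:
            continue                      # simplification: only cross-bucket pairs
        for obj1, obj2 in itertools.product(bucket1, bucket2):
            answer = "larger" if i > j else "smaller"
            examples.append((TEMPLATE.format(obj1=obj1, obj2=obj2), answer))
    return examples

print(len(make_examples()), make_examples()[0])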
Results ROBERTA-L excels in this task, starting from 84% acc. in the zero-shot setup and reaching a MAX of 91% (Table 3). Other models start with random performance and are roughly on par with MLM-BASELINE. ROBERTA-L shows sensitivity to the language, suggesting that the ability to compare object sizes is encoded in it.

Analysis Table 4 shows results of running ROBERTA-L in the zero-shot setup over pairs of objects, where we sampled a single object from each bucket. Objects are ordered by their size from small to large.


Model     | Zero-shot | MLP_MLM (WS / MAX) | LINEAR (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 84        | 88 / 91            | 86 / 90           | 22 / 26
BERT-WWM  | 55        | 65 / 81            | 63 / 77           | 9 / 9
BERT-L    | 52        | 56 / 66            | 53 / 56           | 5 / 4
BERT-B    | 56        | 55 / 72            | 53 / 56           | 2 / 3
RoBERTa-B | 50        | 61 / 74            | 57 / 66           | 8 / 0
Baseline  | 46        | 57 / 74            | - / -             | 2 / 1

Table 3: Results for the OBJECTS COMPARISON probe. Accuracy over two answer candidates (random is 50%).

Model     | Zero-shot | MLP_MLM (WS / MAX) | LINEAR (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 14        | 44 / 55            | 26 / 41           | 3 / 5
BERT-WWM  | 10        | 46 / 57            | 32 / 52           | 2 / 3
BERT-L    | 22        | 45 / 55            | 36 / 50           | 3 / 8
BERT-B    | 11        | 44 / 56            | 30 / 52           | 3 / 8
RoBERTa-B | 15        | 43 / 53            | 25 / 44           | 2 / 6
Baseline  | 20        | 46 / 56            | - / -             | 1 / 2

Table 5: Results for the ALWAYS-NEVER probe. Accuracy over five answer candidates (random is 20%).

OBJ-1 \ OBJ-2 | nail    | pen     | laptop  | table   | house   | airplane | city    | sun
nail          | -       | smaller | smaller | smaller | smaller | smaller  | smaller | smaller
pen           | smaller | -       | smaller | smaller | smaller | smaller  | smaller | smaller
laptop        | larger  | larger  | -       | larger  | smaller | smaller  | smaller | smaller
table         | larger  | larger  | larger  | -       | smaller | larger   | smaller | larger
house         | larger  | larger  | larger  | larger  | -       | larger   | smaller | larger
airplane      | larger  | larger  | larger  | larger  | larger  | -        | larger  | larger
city          | larger  | larger  | larger  | larger  | larger  | larger   | -       | larger
sun           | larger  | larger  | larger  | larger  | larger  | larger   | larger  | -

Table 4: ROBERTA-L zero-shot SIZE COMP. predictions.

Overall, ROBERTA-L correctly predicts ''larger'' below the diagonal, and ''smaller'' above it. Interestingly, errors are concentrated around the diagonal, due to the more fine-grained differences in sizes, and when we compare objects to ''sun'' it mostly emits ''larger'', ignoring the rest of the statement.

4.2 Do LMs know ''always'' from ''often''?

Adverbial modifiers such as ''always'', ''sometimes'', or ''never'' tell us about the quantity or frequency of events (Lewis, 1975; Barwise and Cooper, 1981). Anecdotally, when ROBERTA-L predicts a completion for the phrase ''Cats usually drink [MASK].'', the top completion is ''coffee'', a frequent drink in the literature it was trained on, rather than ''water''. However, humans know that ''Cats NEVER drink coffee''. Prior work explored retrieving the correct quantifier for a statement (Herbelot and Vecchi, 2015; Wang et al., 2017). Here we adapt this task to a masked language model.

The ''Always-Never'' task We present statements, such as ''rhinoceros [MASK] have fur'', with answer candidates, such as ''never'' or ''always''. To succeed, the model must know the frequency of an event, and map the appropriate adverbial modifier to that representation. Linguistically, the task tests how well the model predicts frequency quantifiers (or adverbs) modifying predicates in different statements (Lepore and Ludwig, 2007).
Probe Construction We manually craft templates that contain one slot for a subject and another for an object, e.g., ''FOOD-TYPE is [MASK] part of a ANIMAL's diet.'' (more examples are available in Table 6). The subject slot is instantiated with concepts of the correct semantic type, according to the isA predicate in CONCEPTNET. In the example above we will find concepts that are of type FOOD-TYPE and ANIMAL. The object slot is then instantiated by forming masked templates of the form ''meat is part of a [MASK]'s diet.'' and ''cats have [MASK].'' and letting BERT-L produce the top-20 completions. We filter out completions that do not have the correct semantic type according to the isA predicate. Finally, we crowdsource gold answers using Amazon Mechanical Turk. Annotators were presented with an instantiated template (with the masked token removed), such as ''Chickens have horns.'', and chose the correct answer from 5 candidates: ''never'', ''rarely'', ''sometimes'', ''often'', and ''always''.⁶ We collected 1,300 examples, with 1,000 used for training and 300 for evaluation.

We note that some examples in this probe are similar to OBJECTS COMPARISON (line 4 in Table 6). However, the model must also determine if sizes can be overlapping, which is the case in 56% of the examples.
Results Table 5 shows the results, where random accuracy is 20%, and majority-vote accuracy is 35.5%. In the zero-shot setup, acc. is less than random. In the MLP_MLM and LINEAR setups, acc. reaches a maximum of 57% in BERT-L, but MLM-BASELINE obtains similar acc., implying that the task was mostly tackled at fine-tuning time, and the pre-trained representations did not contribute much.

⁶The class distribution over the answers is ''never'': 24%, ''rarely'': 10%, ''sometimes'': 34%, ''often'': 7%, and ''always'': 23%.


Question                                   | Answer    | Distractor | Acc.
A dish with pasta [MASK] contains pork.    | sometimes | sometimes  | 75
stool is [MASK] placed in the box.         | never     | sometimes  | 68
A lizard [MASK] has a wing.                | never     | always     | 61
A pig is [MASK] smaller than a cat.        | rarely    | always     | 47
meat is [MASK] part of a elephant's diet.  | never     | sometimes  | 41
A calf is [MASK] larger than a dog.        | sometimes | often      | 30

Table 6: Error analysis for ALWAYS-NEVER. Model predictions are in bold, and Acc. shows acc. per template.

Language controls strengthen this hypothesis: performance hardly drops in the PERTURBED LANG. control and only slightly drops in the NO LANG. control. Figure 1B compares the learning curve of ROBERTA-L with controls. MLM-BASELINE consistently outperforms ROBERTA-L, which displays only minor language sensitivity, suggesting that pre-training is not effective for solving this task.

Analysis We generated predictions from the best model, BERT-WWM, and show analysis results in Table 6. For reference, we only selected examples where the human majority vote led to the correct answer, and thus the majority vote is near 100% on these examples. Although the answers ''often'' and ''rarely'' are the gold answer in 19% of the training data, the LMs predict these answers in less than 1% of the examples. In the template ''A dish with FOOD-TYPE [MASK] contains FOOD-TYPE.'' the LM always predicts ''sometimes''. Overall, we find models do not perform well. Reporting bias (Gordon and Van Durme, 2013) may play a role in the inability to correctly determine that ''A rhinoceros NEVER has fur.'' Interestingly, behavioral research conducted on blind humans shows they exhibit a similar bias (Kim et al., 2019).

4.3 Do LMs Capture Negation?

Ideally, the presence of the word ''not'' should affect the prediction of a masked token. However, several recent works have shown that LMs do not take into account the presence of negation in sentences (Ettinger, 2019; Nie et al., 2020; Kassner and Schütze, 2020). Here, we add to this literature by probing whether LMs can properly use negation in the context of synonyms vs. antonyms.


Model     | Zero-shot | MLP_MLM (WS / MAX) | LINEAR (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 75        | 85 / 91            | 77 / 84           | 14 / 21
BERT-WWM  | 57        | 70 / 81            | 61 / 73           | 5 / 6
BERT-L    | 51        | 70 / 82            | 58 / 74           | 5 / 9
BERT-B    | 52        | 68 / 81            | 59 / 74           | 2 / 9
RoBERTa-B | 57        | 74 / 87            | 63 / 78           | 10 / 16
Baseline  | 47        | 67 / 80            | - / -             | 0 / 0

Table 7: Results for the ANTONYM NEGATION probe. Accuracy over two answer candidates (random is 50%).

Do LMs Capture the Semantics of Antonyms? In the statement ''He was [MASK] fast, he was very slow.'', [MASK] should be replaced with ''not'', since ''fast'' and ''slow'' are antonyms. Conversely, in ''He was [MASK] fast, he was very rapid'', the LM should choose a word like ''very'' in the presence of the synonyms ''fast'' and ''rapid''. An LM that correctly distinguishes between ''not'' and ''very'' demonstrates knowledge of the taxonomic relations as well as the ability to reason about the usage of negation in this context.

Probe Construction We sample synonym and antonym pairs from CONCEPTNET (Speer et al., 2017) and WORDNET (Fellbaum, 1998), and use the Google Books Corpus to choose pairs that occur frequently in language. We make use of the statements introduced above. Half of the examples are synonym pairs and half antonyms, generating 4,000 training examples and 500 for evaluation. Linguistically, we test whether the model appropriately predicts a negation vs. intensification adverb based on synonymy/antonymy relations between nouns, adjectives, and verbs.
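A minimal sketch (ours) of gathering such pairs with NLTK's WordNet interface; the construction above additionally uses CONCEPTNET and a Google Books frequency filter, which are omitted here.

from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def pairs_for(word):
    synonyms, antonyms = set(), set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            if lemma.name() != word:
                synonyms.add(lemma.name())
            for ant in lemma.antonyms():
                antonyms.add(ant.name())
    return synonyms, antonyms

syns, ants = pairs_for("fast")          # e.g., synonyms of "fast" and the antonym "slow"
template = "He was [MASK] fast, he was very {}."
examples = [(template.format(w), "very") for w in syns] + \
           [(template.format(w), "not") for w in ants]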
Results ROBERTA-L shows higher than chance acc. of 75% in the zero-shot setting, as well as high Language Sensitivity (Table 7). MLM-BASELINE, equipped with GloVe word embeddings, is able to reach a comparable WS of 67 and MAX of 80%, suggesting that the LMs do not have a large advantage on this task.

4.4 Can LMs handle conjunctions of facts?

We present two probes where a model should understand the reasoning expressed by the word ''and''.

Property conjunction CONCEPTNET is a knowledge base that describes the properties of millions of concepts through its (subject,


Model     | LEARNCURVE (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 49 / 87               | 2 / 4
BERT-WWM  | 46 / 80               | 0 / 1
BERT-L    | 48 / 75               | 2 / 5
BERT-B    | 47 / 71               | 2 / 1
RoBERTa-B | 40 / 57               | 0 / 0
Baseline  | 39 / 49               | 0 / 0

Table 8: Results for the PROPERTY CONJUNCTION probe. Accuracy over three answer candidates (random is 33%).

predicate, object) triples. We use CONCEPTNET to test whether LMs can find concepts for which a conjunction of properties holds. For example, we create a question like ''What is located in a street and is related to octagon?'', where the correct answer is ''street sign''. Because answers are drawn from CONCEPTNET, they often consist of more than one word-piece, thus examples are generated in the MC-QA setup.

Probe Construction To construct an example, we first choose a concept that has two properties in CONCEPTNET, where a property is a (predicate, object) pair. For example, stop sign has the properties (atLocation, street) and (relatedTo, octagon). Then, we create two distractor concepts, for which only one property holds: car has the property (atLocation, street), and math has the property (relatedTo, octagon). Given the answer concept, the distractors, and the properties, we can automatically generate pseudo-language questions and answers by mapping 15 CONCEPTNET predicates to natural language questions. We split examples such that the concepts in training and evaluation are disjoint. This linguistic structure tests whether the LM can answer questions with conjoined predicates, requiring world knowledge of objects and relations.
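The sketch below (ours) shows the flavor of this generation step; the predicate-to-text mapping is a small invented subset of the 15 predicates mentioned above.

PREDICATE_TEXT = {             # a small illustrative subset of the mapping
    "atLocation": "located at {}",
    "relatedTo": "related to {}",
    "usedFor": "used for {}",
}

def make_question(answer, prop1, prop2, distractor1, distractor2):
    """prop1/prop2 are (predicate, object) pairs that both hold for `answer`;
    distractor1 satisfies only prop1, distractor2 only prop2."""
    p1 = PREDICATE_TEXT[prop1[0]].format(prop1[1])
    p2 = PREDICATE_TEXT[prop2[0]].format(prop2[1])
    question = f"What is usually {p1} and {p2}?"
    return question, [answer, distractor1, distractor2]

q, options = make_question(
    "pen", ("atLocation", "hand"), ("usedFor", "writing"), "spoon", "computer")
# q == "What is usually located at hand and used for writing?"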

Results In MC-QA, we fine-tune the entire network and do not freeze any representations. Zero-shot evaluation cannot be applied because the weights of MLP_QA are untrained. All LMs consistently improve as the number of examples increases, reaching a MAX of 57% to 87% (Table 8). The high MAX results suggest that the LMs generally have the required pre-existing knowledge. The WS of most models is slightly higher than that of the baselines (49% MAX and 39 WS). Language Sensitivity is slightly higher than zero in some models. Overall, the results suggest the LMs do have some capability in this task, but the proximity to baseline results and the low language sensitivity make it hard to clearly determine whether it existed before fine-tuning.

To further validate our findings, we construct a parallel version of our data, where we replace the word ''and'' by the phrase ''but not''. In this version, the correct answer is the first distractor in the original experiment, where one property holds and the other does not. Overall, we observe a similar trend (with an increase in performance across all models): MAX results are high (79–96%), indicating that the LMs hold the relevant information, but improvement over ESIM-Baseline and language sensitivity are low. For brevity, we omit the detailed numerical results.

Taxonomy conjunction A different operation is to find properties that are shared by two concepts. Specifically, we test whether LMs can find the mutual hypernym of a pair of concepts. For example, ''A germ and a human are both a type of [MASK].'', where the answer is ''organism''.

Probe Construction We use CONCEPTNET and WORDNET to find pairs of concepts and their hypernyms, keeping only pairs that frequently appear in the GOOGLE BOOK CORPUS. The example template is ''A ENT-1 and a ENT-2 are both a type of [MASK].'', where ENT-1 and ENT-2 are replaced with entities that have a common hypernym, which is the gold answer. Distractors are concepts that are hypernyms of ENT-1, but not ENT-2, or vice versa. For evaluation, we keep all examples related to food and animal taxonomies, for example, ''A beer and a ricotta are both a type of [MASK].'', where the answer is ''food'' and the distractors are ''cheese'' and ''alcohol''. This phrasing requires the model to handle conjoined co-hyponyms in the subject position, based on lexical relations of hyponymy / hypernymy between nouns. For training, we use examples from different taxonomic trees, such that the concepts in the training and evaluation sets are disjoint.
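For illustration, a shared hypernym can be retrieved with NLTK's WordNet interface as in the sketch below (ours); the pipeline described above additionally uses CONCEPTNET and frequency filtering, and the printed outputs are indicative only.

from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def mutual_hypernyms(word1, word2):
    # First senses only, for brevity; real data would need sense filtering.
    s1, s2 = wn.synsets(word1)[0], wn.synsets(word2)[0]
    return [h.lemma_names()[0] for h in s1.lowest_common_hypernyms(s2)]

print(mutual_hypernyms("ferry", "boat"))    # expected to be near 'vessel' / 'vehicle'
print(mutual_hypernyms("germ", "human"))    # expected to be near 'organism'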
Results Table 9 shows that the models' zero-shot acc. is substantially higher than random (33%), but overall, even after fine-tuning, acc. is at most 59%. However, the NO LANG. control shows some language sensitivity, suggesting that some models have pre-existing capabilities.


Model     | Zero-shot | MLP_MLM (WS / MAX) | LINEAR (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 45        | 50 / 56            | 45 / 46           | 0 / 3
BERT-WWM  | 46        | 48 / 52            | 46 / 46           | 0 / 7
BERT-L    | 53        | 54 / 57            | 53 / 54           | 0 / 15
BERT-B    | 47        | 48 / 50            | 47 / 47           | 0 / 12
RoBERTa-B | 46        | 50 / 59            | 47 / 49           | 0 / 18
Baseline  | 33        | 33 / 47            | - / -             | 1 / 2

Table 9: Results for the TAXONOMY CONJUNCTION probe. Accuracy over three answer candidates (random is 33%).

Model         | LEARNCURVE (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L     | 42 / 50               | 0 / 2
BERT-WWM      | 47 / 53               | 1 / 4
BERT-L        | 45 / 51               | 1 / 4
BERT-B        | 43 / 48               | 0 / 3
RoBERTa-B     | 41 / 46               | 0 / 0
ESIM-Baseline | 49 / 54               | 3 / 0

Table 10: Results for ENCYCLOPEDIC COMPOSITION. Accuracy over three answer candidates (random is 33%).

Analysis Analyzing the errors of ROBERTA-L, we found that a typical error is predicting for ''A crow and a horse are both a type of [MASK].'' that the answer is ''bird'', rather than ''animal''. Specifically, LMs prefer hypernyms that are closer in terms of edge distance on the taxonomy tree. Thus, a crow is first a bird, and only then an animal. We find that when distractors are closer to one of the entities in the statement than the gold answer, the models consistently (80%) choose the distractor, ignoring the second entity in the phrase.

4.5 Can LMs do multi-hop reasoning?

Questions that require multi-hop reasoning, such as ''Who is the director of the movie about a WW2 pacific medic?'', have recently drawn attention (Yang et al., 2018b; Welbl et al., 2018; Talmor and Berant, 2018) as a challenging task for contemporary models. But do pre-trained LMs have some internal mechanism to handle such questions?

To address this question, we create two probes, one for compositional question answering, and the other for a multi-hop setup, building upon our observation (§3) that some LMs can compare ages.

Encyclopedic composition We construct questions such as ''When did the band where John Lennon played first form?''. Here answers require multiple tokens, thus we use the MC-QA setup.

Probe Construction We use the following three templates: (1) ''when did the band where ENT played first form?'', (2) ''who is the spouse of the actor that played in ENT?'', and (3) ''where is the headquarters of the company that ENT established located?''. We instantiate ENT using information from WIKIDATA (Vrandečić and Krötzsch, 2014), choosing challenging distractors. For example, for template 1 the distractor will be a year


Figure 4: Learning curves in two tasks. For each task, the best performing LM is shown alongside the NO LANG. control and the baseline model. (A) is the correct answer.

close to the gold answer, and for template 3 it will be a city in the same country as the gold answer city. This linguistic structure introduces a (restrictive) relative clause that requires a) correctly resolving the reference of the noun modified by the relative clause, and b) subsequently answering the full question.

To solve the question, the model must have knowledge of all single-hop encyclopedic facts required for answering it. Thus, we first fine-tune the model on all such facts (e.g., ''What company did Bill Gates establish? Microsoft'') from the training and evaluation set, and then fine-tune on multi-hop composition.

Results Results are summarized in Table 10. All models achieve low acc. in this task, and the baseline performs best with a MAX of 54%. The language sensitivity of all models is small, and MLM-BASELINE performs slightly better (Figure 4B), suggesting that the LMs are unable to resolve compositional questions, but also struggle to learn it with some supervision.

Multi-hop Comparison Multi-hop reasoning can be found in many common structures in natural language.


Model     | Zero-shot | MLP_MLM (WS / MAX) | LINEAR (WS / MAX) | LANGSENSE (pert / nolang)
RoBERTa-L | 29        | 36 / 49            | 31 / 41           | 2 / 2
BERT-WWM  | 33        | 41 / 65            | 32 / 36           | 6 / 4
BERT-L    | 33        | 32 / 35            | 31 / 34           | 0 / 3
BERT-B    | 32        | 33 / 35            | 33 / 35           | 0 / 2
RoBERTa-B | 33        | 32 / 40            | 29 / 33           | 0 / 0
Baseline  | 34        | 35 / 48            | - / -             | 1 / 0

Table 11: Results for COMPOSITIONAL COMPARISON. Accuracy over three answer candidates (random is 33%).

In the phrase ''When comparing a 83 year old, a 63 year old and a 56 year old, the [MASK] is oldest'', one must find the oldest person, and then refer to its ordering: first, second, or third.

Probe Construction We use the template above, treating the ages as arguments, and ''first'', ''second'', and ''third'' as answers. Age arguments are in the same ranges as in AGE-COMPARE. Linguistically, the task requires predicting the subject of sentences whose predicate is in a superlative form, where the relevant information is contained in a ''when''-clause. The sentence also contains nominal ellipsis, also known as fused heads (Elazar and Goldberg, 2019).

Results All three possible answers appear in ROBERTA-L's top-10 zero-shot predictions, indicating that the model sees the answers as viable choices. Although successful in AGE-COMPARE, the performance of ROBERTA-L is poor in this probe (Table 11), with zero-shot acc. that is almost random, WS slightly above random, MAX lower than MLM-BASELINE (48%), and close to zero language sensitivity. All LMs seem to be learning the task during probing. Although BERT-WWM was able to partially solve the task with a MAX of 65% when approaching 4,000 training examples, the models do not appear to show multi-step capability in this task.

5 Medals

We summarize the results of the oLMpic Games in Table 12. Generally, the LMs did not demonstrate strong pre-training capabilities in these symbolic reasoning tasks. BERT-WWM showed partial success in a few tasks, whereas ROBERTA-L showed high performance in AGE COMPARISON, OBJECTS COMPARISON, and ANTONYM NEGATION, and emerges as the most promising LM.

                | RoBERTa Large | BERT WWM | BERT Large | RoBERTa Base | BERT Base
ALWAYS-NEVER    |               |          |            |              |
AGE COMPARISON  | X             | X        |            |              |
OBJECTS COMPAR. | X             |          |            |              |
ANTONYM NEG.    | X             |          |            |              |
PROPERTY CONJ.  |               |          |            |              |
TAXONOMY CONJ.  |               |          |            |              |
ENCYC. COMP.    |               |          |            |              |
MULTI-HOP COMP. |               |          |            |              |

Table 12: The oLMpic games medals, summarizing per-task success. X indicates the LM has achieved high accuracy considering controls and baselines, X– indicates partial success.

However, when perturbed, ROBERTA-L fails to demonstrate consistent generalization and abstraction.

Analysis of correlation with pre-training data A possible hypothesis for why a particular model is successful in a particular task might be that the language of a probe is more common in the corpus it was pre-trained on. To check this, we compute the unigram distribution over the training corpus of both BERT and ROBERTA. We then compute the average log probability of the development set under these two unigram distributions for each task (taking into account only content words). Finally, we compute the correlation between which model performs better on a probe (ROBERTA-L vs. BERT-WWM) and which training corpus induces a higher average log probability on that probe. We find that the Spearman correlation is 0.22, hinting that the unigram distributions do not fully explain the difference in performance.
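A sketch (ours) of this analysis with toy counts standing in for the two pre-training corpora; scipy's spearmanr is assumed for the correlation.

import math
from collections import Counter
from scipy.stats import spearmanr

def unigram_logprob(text_tokens, counts, total, alpha=1.0):
    """Average per-token log probability with add-alpha smoothing."""
    vocab = len(counts)
    return sum(math.log((counts[t] + alpha) / (total + alpha * vocab))
               for t in text_tokens) / len(text_tokens)

# Toy corpora standing in for the BERT / RoBERTa pre-training data.
bert_counts = Counter("the cat sat on the mat".split())
roberta_counts = Counter("a person born in 1980 is older".split())

dev_sets = [["older", "younger", "person"], ["cat", "mat"], ["born", "1980"]]
corpus_pref = [unigram_logprob(d, roberta_counts, sum(roberta_counts.values()))
               - unigram_logprob(d, bert_counts, sum(bert_counts.values()))
               for d in dev_sets]
model_pref = [1, -1, 1]   # +1 if ROBERTA-L beats BERT-WWM on the probe, else -1
rho, _ = spearmanr(corpus_pref, model_pref)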

6 Discussion

We presented eight different tasks for evaluating the reasoning abilities of models, alongside an evaluation protocol for disentangling pre-training from fine-tuning. We found that even models that have identical structure and objective functions differ not only quantitatively but also qualitatively. Specifically, ROBERTA-L has shown reasoning abilities that are absent from other models. Thus, with appropriate data and optimization, models can acquire from an LM objective skills that might be intuitively surprising.

However, when current LMs succeed in a reasoning task, they do not do so through abstraction and composition as humans perceive it. The abilities are context-dependent: if ages are compared, then the numbers should be typical ages. Discrepancies from the training distribution


lead to large drops in performance. Last, the performance of LMs in many reasoning tasks is poor.

Our work sheds light on some of the blind spots of current LMs. We will release our code and data to help researchers evaluate the reasoning abilities of models, aid the design of new probes, and guide future work on pre-training, objective functions, and model design for endowing models with capabilities they are currently lacking.

Acknowledgments

This work was completed in partial fulfillment of the PhD degree of the first author. We thank our colleagues at The Allen Institute of AI, especially Kyle Richardson, Asaf Amrami, Mor Pipek, Myle Ott, Hillel Taub-Tabib, and Reut Tsarfaty. This research was partially supported by The Israel Science Foundation grant 942/16, The Blavatnik Computer Science Research Fund, The Yandex Initiative for Machine Learning, and the European Union's Seventh Framework Programme (FP7) under grant agreements no. 802774-ERC-iEXTRACT and no. 802800-DELPHI.

Referencias

Yossi Adi, Einat Kermany, Yonatan Belinkov,
Ofer Lavi, and Yoav Goldberg. 2016. Fine-
grained analysis of sentence embeddings using
auxiliary prediction tasks. arXiv preprint
arXiv:1608.04207.

Hessam Bagherinezhad, Hannaneh Hajishirzi,
Yejin Choi, and Ali Farhadi. 2016. Are ele-
phants bigger than butterflies? reasoning about
sizes of objects. In Thirtieth AAAI Conference
sobre Inteligencia Artificial.

Jon Barwise and Robin Cooper. 1981. Generalized
quantifiers and natural language, Philosophy,
idioma, and artificial intelligence, Saltador,
pages 241–301. DOI: https://doi.org
/10.1007/978-94-009-2727-8 10

Yonatan Belinkov and James Glass. 2019. Anal-
ysis methods in neural language processing:
A survey. Transactions of the Association for
Ligüística computacional, 7:49–72. DOI:
https://doi.org/10.1162/tacl a 00254

L´eonard Blier and Yann Ollivier. 2018. The de-
scription length of deep learning models. En

Avances en el procesamiento de información neuronal
Sistemas, pages 2216–2226.

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei,
Hui Jiang, and Diana Inkpen. 2017. Enhanced
LSTM for natural language inference. En profesional-
ceedings of the 55th Annual Meeting of the
Asociación de Lingüística Computacional
(Volumen 1: Artículos largos), pages 1657–1668,
vancouver, Canada. Asociación de Computación-
lingüística nacional. DOI: https://doi
.org/10.18653/v1/P17-1152

Andy Coenen, Emily Reif, Ann Yuan, Been Kim,
Adam Pearce, Fernanda Vi´egas, and Martin
Wattenberg. 2019. Visualizing and measuring
the geometry of BERT. arXiv preimpresión arXiv:
1906.02715.

Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3079–3087. Curran Associates, Inc.

j. Devlin, METRO. Chang, k. Sotavento, and K. Toutanova.
2019. BERT: Pre-training of deep bidirectional
transformers for language understanding. En
North American Association for Computational
Lingüística (NAACL).

Yanai Elazar and Yoav Goldberg. 2019. Where's my head? Definition, data set, and models for numeric fused-head identification and resolution. Transactions of the Association for Computational Linguistics, 7:519–535. DOI: https://doi.org/10.1162/tacl_a_00280

Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How large are lions? Inducing distributions over quantitative attributes. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3973–3983, Florence, Italy. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P19-1388

Allyson Ettinger. 2019. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. arXiv preprint arXiv:1907.13528. DOI: https://doi.org/10.1162/tacl_a_00298

Allyson Ettinger, Ahmed Elgohary, and Philip
Resnik. 2016. Probing for semantic evidence
of composition by means of simple classifica-
tion tasks. In Proceedings of the 1st Workshop
on Evaluating Vector-Space Representations
for NLP, pages 134–139. DOI: https://
doi.org/10.18653/v1/W16-2524

C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. DOI: https://doi.org/10.7551/mitpress/7287.001.0001

Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of actions and objects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 266–276. DOI: https://doi.org/10.18653/v1/P17-1025

Yoav Goldberg. 2019. Assessing BERT’s syntac-
tic abilities. arXiv preimpresión arXiv:1901.05287.

Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, pages 25–30. ACM.

Aurélie Herbelot and Eva Maria Vecchi. 2015. Building a shared world: Mapping distributional to model-theoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 22–32. DOI: https://doi.org/10.18653/v1/D15-1003

John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743. DOI: https://doi.org/10.18653/v1/D19-1275

John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 4129–4138.

Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2019. How can we know what language models know? arXiv preprint arXiv:1911.12543. DOI: https://doi.org/10.1162/tacl_a_00324

Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/2020.acl-main.698

Judy S. Kim, Giulia V. Elli, and Marina Bedny. 2019. Knowledge of animal appearance among sighted and blind adults. Proceedings of the National Academy of Sciences, 116(23):11213–11222. DOI: https://doi.org/10.1073/pnas.1900952116, PMID: 31113884, PMCID: PMC6561279

Ernest Lepore and Kirk Ludwig. 2007. Donald Davidson's Truth-Theoretic Semantics. Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199290932.001.0001

David Lewis. 1975. Adverbs of quantification. Formal Semantics: The Essential Readings, 178:188. DOI: https://doi.org/10.1002/9780470758335.ch7

Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253.

Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016a. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. TACL, 4:521–535. DOI: https://doi.org/10.1162/tacl_a_00115

Tal Linzen, D. Emmanuel, and G. Yoav. 2016b. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics (TACL), 4. DOI: https://doi.org/10.1162/tacl_a_00115

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Todor Mihaylov, Peter Clark, Tushar Khot, y
Ashish Sabharwal. 2018. Can a suit of armor
conduct electricity? A new dataset for open
book question answering. In EMNLP.

Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885–4901, Online. Association for Computational Linguistics.

j. Pennington, R. Socher, and C. D. Manning.
2014. GloVe: Global vectors for word re-
presentación. In Empirical Methods in Nat-
(EMNLP),
ural
pages 1532–1543. DOI: https://doi.org
/10.3115/v1/D14-1162

Procesando

Idioma

METRO. mi. Peters, METRO. Neumann, METRO.

Iyyer, METRO.
jardinero, C. clark, k. Sotavento, y yo. Zettlemoyer.
2018a. Deep contextualized word represen-
taciones. In North American Association for
Ligüística computacional (NAACL). DOI:
https://doi.org/10.18653/v1/N18-1202

Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509. DOI: https://doi.org/10.18653/v1/D18-1179

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473. DOI: https://doi.org/10.18653/v1/D19-1250

Sandro Pezzelle and Raquel Fernández. 2019. Is the red square big? MALeViC: Modeling adjectives leveraging visual contexts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2858–2869. DOI: https://doi.org/10.18653/v1/D19-1285

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).

Ohad Rozen, Vered Shwartz, Roee Aharoni, and Ido Dagan. 2019. Diversify your datasets: Analyzing generalization via controlled variance in adversarial datasets. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 196–205. DOI: https://doi.org/10.18653/v1/K19-1019

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725. DOI: https://doi.org/10.18653/v1/P16-1162

Vered Shwartz and Ido Dagan. 2019. Still a pain in the neck: Evaluating text representations on lexical composition. In Transactions of the Association for Computational Linguistics (TACL). DOI: https://doi.org/10.1162/tacl_a_00277

Robyn Speer, Joshua Chin, and Catherine Havasi.
2017. Conceptnet 5.5: An open multilingual
graph of general knowledge. In Thirty-First
Conferencia AAAI sobre Inteligencia Artificial.

A. Talmor and J. Berant. 2018. The web as knowledge-base for answering complex questions. In North American Association for Computational Linguistics (NAACL).

A. Talmor, J. Herzig, N. Lourie, and J. Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In North American Association for Computational Linguistics (NAACL).

yo

D
oh
w
norte
oh
a
d
mi
d

F
r
oh
metro
h

t
t

pag

:
/
/

d
i
r
mi
C
t
.

metro

i
t
.

mi
d
tu

/
t

a
C
yo
/

yo

a
r
t
i
C
mi

pag
d

F
/

d
oh

i
/

.

1
0
1
1
6
2

/
t

yo

a
C
_
a
_
0
0
3
4
2
1
9
2
3
7
1
6

/

/
t

yo

a
C
_
a
_
0
0
3
4
2
pag
d

.

F

b
y
gramo
tu
mi
s
t

t

oh
norte
0
8
S
mi
pag
mi
metro
b
mi
r
2
0
2
3

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–4601, Florence, Italy. Association for Computational Linguistics.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang,
Adam Poliak, R Thomas McCoy, Najoung
kim, Benjamin Van Durme, Sam Bowman,
Dipanjan Das, and Ellie Pavlick. 2019b.
What do you learn from context? Probing
for sentence structure in contextualized word
representaciones. In International Conference on
Learning Representations.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

D. Vrandečić and M. Krötzsch. 2014. Wikidata: A free collaborative knowledgebase. Communications of the ACM, 57. DOI: https://doi.org/10.1145/2629489

Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? Probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5310–5318. DOI: https://doi.org/10.18653/v1/D19-1534

Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. 2017. Premise selection for theorem proving by deep graph embedding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 2786–2796. Curran Associates, Inc.

Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretič, and Samuel R. Bowman. 2019. Investigating BERT's knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2870–2880. DOI: https://doi.org/10.18653/v1/D19-1286

Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287–302. DOI: https://doi.org/10.1162/tacl_a_00021

Yiben Yang, Larry Birnbaum, Ji-Ping Wang, and Doug Downey. 2018a. Extracting commonsense properties from embeddings with limited human guidance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 644–649. DOI: https://doi.org/10.18653/v1/P18-2102

z. Cual, PAG. chi, S. zhang, Y. bengio, W.. W..
cohen, R. Salakhutdinov, and C. D. Manning.
2018b. HotpotQA: A dataset for diverse, ex-
plainable multi-hop question answering. En
in Natural Language
Empirical Methods
Procesando (EMNLP). DOI: https://doi
.org/10.18653/v1/D18-1259, PMCID:
PMC6156886

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5753–5763.

D. Yogatama, C. de M. d’Autume, j. Connor,
t. Kocisky, METRO. Chrzanowski, l. kong,
A. Lazaridou, W.. Abadejo, l. Yu, C. Dyer, y
Phil Blunson. 2019. Learning and evaluating
general linguistic intelligence. arXiv preprint
arXiv:1901.11373.

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). DOI: https://doi.org/10.18653/v1/D18-1009
