How Can We Know What Language Models Know?

Zhengbao Jiang1∗ Frank F. Xu1∗

Jun Araki2 Graham Neubig1

1Language Technologies Institute, Carnegie Mellon University
2Bosch Research North America

{zhengbaj,fangzhex,gneubig}@cs.cmu.edu

jun.araki@us.bosch.com

Abstract

Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a ___ by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a ___'' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.

1 Introduction

Recent years have seen the primary role of language models (LMs) transition from generating or evaluating the fluency of natural text (Mikolov and Zweig, 2012; Merity et al., 2018; Melis et al., 2018; Gamon et al., 2005) to being a powerful tool for text understanding. This understanding has mainly been achieved through the use of language modeling as a pre-training task for feature extractors, where the hidden vectors learned through a language modeling objective are then used in

∗ The first two authors contributed equally.


down-stream language understanding systems
(Dai and Le, 2015; Melamud et al., 2016; Peters
et al., 2018; Devlin et al., 2019).

Interestingly, it is also becoming apparent that LMs1 themselves can be used as a tool for text understanding by formulating queries in natural language and either generating textual answers directly (McCann et al., 2018; Radford et al., 2019), or assessing multiple choices and picking the most likely one (Zweig and Burges, 2011; Rajani et al., 2019). For example, LMs have been used to answer factoid questions (Radford et al., 2019), answer common sense queries (Trinh and Le, 2018; Sap et al., 2019), or extract factual knowledge about relations between entities (Petroni et al., 2019; Baldini Soares et al., 2019). Regardless of the end task, the knowledge contained in LMs is probed by providing a prompt, and letting the LM either generate the continuation of a prefix (e.g., ''Barack Obama was born in ___'') or predict missing words in a cloze-style template (e.g., ''Barack Obama is a ___ by profession'').

However, while this paradigm has been used to achieve a number of intriguing results regarding the knowledge expressed by LMs, these results usually rely on prompts that were manually created based on the intuition of the experimenter. These manually created prompts (e.g., ''Barack Obama was born in ___'') might be sub-optimal because LMs might have learned target knowledge from substantially different contexts (e.g., ''The birth place of Barack Obama is Honolulu, Hawaii.'') during their training. Thus it is quite possible that a fact that the LM does know cannot be retrieved due to the prompts not being effective queries for the fact. Hence, existing results are simply a lower bound on the extent of knowledge contained

1 Some models we use in this paper, e.g., BERT (Devlin et al., 2019), are bi-directional and do not directly define a probability distribution over text, which is the underlying definition of an LM. However, we call them LMs for simplicity.

Transactions of the Association for Computational Linguistics, vol. 8, pp. 423–438, 2020. https://doi.org/10.1162/tacl_a_00324
Action Editor: Timothy Baldwin. Submission batch: 12/2019; Revision batch: 3/2020; Published 7/2020.
© 2020 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.


in LMs, and in fact, LMs may be even more knowledgeable than these initial results indicate.

In this paper we ask the question: ''How can we tighten this lower bound and get a more accurate estimate of the knowledge contained in state-of-the-art LMs?'' This is interesting both scientifically, as a probe of the knowledge that LMs contain, and from an engineering perspective, as it will result in higher recall when using LMs as part of a knowledge extraction system.

In particular, we focus on the setting of Petroni et al. (2019), who examine extracting knowledge regarding the relations between entities (definitions in § 2). We propose two automatic methods to systematically improve the breadth and quality of the prompts used to query the existence of a relation (§ 3). Specifically, as shown in Figure 1, these are mining-based methods inspired by previous relation extraction methods (Ravichandran and Hovy, 2002), and paraphrasing-based methods that take a seed prompt (either manually created or automatically mined) and paraphrase it into several other semantically similar expressions. Further, because different prompts may work better when querying for different subject-object pairs, we also investigate lightweight ensemble methods to combine the answers from different prompts together (§ 4).

We experiment on the LAMA benchmark (Petroni et al., 2019), which is an English-language benchmark devised to test the ability of LMs to retrieve relations between entities (§ 5). We first demonstrate that improved prompts significantly improve accuracy on this task, with the one-best prompt extracted by our method raising accuracy from 31.1% to 34.1% on BERT-base (Devlin et al., 2019), with similar gains being obtained with

BERT-large as well. We further demonstrate that using a diversity of prompts through ensembling further improves accuracy to 39.6%. We perform extensive analysis and ablations, gleaning insights both about how to best query the knowledge stored in LMs and about potential directions for incorporating knowledge into LMs themselves. Finally, we have released the resulting LM Prompt And Query Archive (LPAQA) to facilitate future experiments on probing knowledge contained in LMs.

Figure 1: Top-5 predictions and their log probabilities using different prompts (manual, mined, and paraphrased) to query BERT. The correct answer is underlined.

2 Knowledge Retrieval from LMs

Retrieving factual knowledge from LMs is quite different from querying standard declarative knowledge bases (KBs). In standard KBs, users formulate their information needs as a structured query defined by the KB schema and query language. For example, SELECT ?y WHERE {wd:Q76 wdt:P19 ?y} is a SPARQL query to search for the birth place of Barack Obama. In contrast, LMs must be queried with natural language prompts, such as ''Barack Obama was born in ___'', and the word assigned the highest probability in the blank will be returned as the answer. Unlike deterministic queries on KBs, this provides no guarantees of correctness or success.

While the idea of prompts is common to methods for extracting many varieties of knowledge from LMs, in this paper we specifically follow the formulation of Petroni et al. (2019), where factual knowledge is in the form of triples ⟨x, r, y⟩. Here x indicates the subject, y indicates the object, and r is their corresponding relation. To query the LM, r is associated with a cloze-style prompt t_r consisting of a sequence of tokens, two of which are placeholders for subjects and objects (e.g., ''x plays at y position''). The existence of the fact in the LM is assessed by replacing x with the surface form of the subject, and letting the model predict the missing object (e.g., ''LeBron James plays at ___ position''):2

$$\hat{y} = \arg\max_{y' \in \mathcal{V}} P_{\mathrm{LM}}(y' \mid x, t_r),$$

2 We can also go the other way around by filling in the objects and predicting the missing subjects. Since our focus is on improving prompts, we choose to be consistent with Petroni et al. (2019) to make a fair comparison, and leave exploring other settings to future work. Also notably, Petroni et al. (2019) only use objects consisting of a single token, so we only need to predict one word for the missing slot.
where V is the vocabulary, and P_LM(y′ | x, t_r) is the LM probability of predicting y′ in the blank conditioned on the other tokens (i.e., the subject and the prompt).3 We say that an LM has knowledge of a fact if ŷ is the same as the ground-truth y. Because we would like our prompts to most effectively elicit any knowledge contained in the LM itself, a ''good'' prompt should trigger the LM to predict the ground-truth objects as often as possible.

3 We restrict ourselves to masked LMs in this paper because the missing slot might not be the last token in the sentence, and computing this probability in traditional left-to-right LMs using Bayes' theorem is not tractable.
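To make this querying procedure concrete, the following is a minimal sketch of cloze-style querying with a masked LM. It assumes the HuggingFace transformers library and an [X]/[Y] placeholder convention; it is an illustration on our part, not the authors' released code.

# Minimal sketch of cloze-style querying with a masked LM (assumes the
# HuggingFace "transformers" library; not the paper's released implementation).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def query(subject, prompt):
    # The prompt uses "[X]" for the subject and "[Y]" for the object slot,
    # e.g., "[X] plays at [Y] position".
    text = prompt.replace("[X]", subject).replace("[Y]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]   # scores over the vocabulary V
    return tokenizer.convert_ids_to_tokens(logits.argmax().item())

print(query("LeBron James", "[X] plays at [Y] position"))  # prints the argmax object token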

In previous work (McCann et al., 2018; Radford et al., 2019; Petroni et al., 2019), t_r has been a single manually defined prompt based on the intuition of the experimenter. As noted in the introduction, this method has no guarantee of being optimal, and thus we propose methods that learn effective prompts from a small set of training data consisting of gold subject-object pairs for each relation.

3 Prompt Generation

First, we tackle prompt generation: the task of generating a set of prompts {t_{r,1}, ..., t_{r,T}} for each relation r, where at least some of the prompts effectively trigger LMs to predict ground-truth objects. We employ two practical methods to either mine prompt candidates from a large corpus (§ 3.1) or diversify a seed prompt through paraphrasing (§ 3.2).

3.1 Mining-based Generation

Our first method is inspired by template-based relation extraction methods (Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002), which are based on the observation that words in the vicinity of the subject x and object y in a large corpus often describe the relation r. Based on this intuition, we first identify all the Wikipedia sentences that contain both subjects and objects of a specific relation r using the assumption of distant supervision, then propose two methods to extract prompts.

Middle-word Prompts Following the observation that words in the middle of the subject and object are often indicative of the relation, we directly use those words as prompts. For example, ''Barack Obama was born in Hawaii'' is converted into a prompt ''x was born in y'' by replacing the subject and the object with placeholders.
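A minimal sketch of middle-word prompt extraction from a distantly supervised sentence (our own illustration; the tokenized input format and helper names are assumptions):

def middle_word_prompt(sentence_tokens, subj_tokens, obj_tokens):
    """Turn a distantly supervised sentence into a middle-word prompt,
    e.g., "Barack Obama was born in Hawaii" -> "[X] was born in [Y]"."""
    def find(span, tokens):
        for i in range(len(tokens) - len(span) + 1):
            if tokens[i:i + len(span)] == span:
                return i, i + len(span)
        return None
    s, o = find(subj_tokens, sentence_tokens), find(obj_tokens, sentence_tokens)
    if s is None or o is None or s[1] > o[0]:
        return None  # this sketch keeps only "subject ... object" word order
    middle = sentence_tokens[s[1]:o[0]]
    return " ".join(["[X]"] + middle + ["[Y]"])

print(middle_word_prompt("Barack Obama was born in Hawaii".split(),
                         "Barack Obama".split(), "Hawaii".split()))
# -> "[X] was born in [Y]"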

Dependency-based Prompts Toutanova et al. (2015) note that in the case of templates where words do not appear in the middle (e.g., ''The capital of France is Paris''), templates based on syntactic analysis of the sentence can be more effective for relation extraction. We follow this insight in our second strategy for prompt creation, which parses sentences with a dependency parser to identify the shortest dependency path between the subject and object, then uses the phrase spanning from the leftmost word to the rightmost word in the dependency path as a prompt. For example, the dependency path in the above example is ''France ←pobj– of ←prep– capital ←nsubj– is –attr→ Paris'', where the leftmost and rightmost words are ''capital'' and ''Paris'', giving a prompt of ''capital of x is y''.

Notably, these mining-based methods do not rely on any manually created prompts, and can thus be flexibly applied to any relation where we can obtain a set of subject-object pairs. This will result in diverse prompts, covering a wide variety of ways that the relation may be expressed in text. However, it may also be prone to noise, as many prompts acquired in this way may not be very indicative of the relation (e.g., ''x, y''), even if they are frequent.

3.2 Paraphrasing-based Generation

Our second method for generating prompts is more targeted: it aims to improve lexical diversity while remaining relatively faithful to the original prompt. Specifically, we do so by paraphrasing the original prompt into other semantically similar or identical expressions. For example, if our original prompt is ''x shares a border with y'', it may be paraphrased into ''x has a common border with y'' and ''x adjoins y''. This is conceptually similar to query expansion techniques used in information retrieval that reformulate a given query to improve retrieval performance (Carpineto and Romano, 2012).

Although many methods could be used for
paraphrasing (Romano et al., 2006; Bhagat and
Ravichandran, 2008), we follow the simple


method of using back-translation (Sennrich et al., 2016; Mallinson et al., 2017) to first translate the initial prompt into B candidates in another language, each of which is then back-translated into B candidates in the original language. We then rank the B^2 candidates based on their round-trip probability (i.e., P_forward(t̄ | t̂) · P_backward(t | t̄), where t̂ is the initial prompt, t̄ is the translated prompt in the other language, and t is the final prompt), and keep the top T prompts.
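The round-trip ranking step can be sketched as follows (an illustration only: forward_mt and backward_mt stand in for whatever beam-search translation interface is available and are assumed to return (candidate, log-probability) pairs):

import heapq

def paraphrase_by_backtranslation(seed_prompt, forward_mt, backward_mt, B=7, T=40):
    """Round-trip paraphrasing sketch. `forward_mt` / `backward_mt` are assumed
    helpers that return up to B (candidate, log_prob) pairs; any beam-search NMT
    wrapper (e.g., around the WMT'19 en-de models) could play this role."""
    candidates = {}
    for mid, lp_fwd in forward_mt(seed_prompt, n_best=B):        # t_hat -> t_bar
        for para, lp_bwd in backward_mt(mid, n_best=B):          # t_bar -> t
            score = lp_fwd + lp_bwd   # log of the round-trip probability
            if para not in candidates or score > candidates[para]:
                candidates[para] = score
    # keep the top-T paraphrases by round-trip probability
    return heapq.nlargest(T, candidates.items(), key=lambda kv: kv[1])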

4 Prompt Selection and Ensembling

In the previous section, we described methods to generate a set of candidate prompts {t_{r,1}, ..., t_{r,T}} for a particular relation r. Each of these prompts may be more or less effective at eliciting knowledge from the LM, and thus it is necessary to decide how to use these generated prompts at test time. In this section, we describe three methods to do so.

4.1 Top-1 Prompt Selection

For each prompt, we can measure its accuracy of predicting the ground-truth objects (on a training dataset) using:

$$A(t_{r,i}) = \frac{\sum_{\langle x, y \rangle \in \mathcal{R}} \delta\!\left(y = \arg\max_{y'} P_{\mathrm{LM}}(y' \mid x, t_{r,i})\right)}{|\mathcal{R}|},$$

where R is a set of subject-object pairs with relation r, and δ(·) is Kronecker's delta function, returning 1 if the internal condition is true and 0 otherwise. In the simplest method for querying the LM, we choose the prompt with the highest accuracy and query using only this prompt.
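A sketch of this selection step, assuming a predict(x, prompt) helper that returns the LM's argmax object for a filled-in prompt (e.g., the querying sketch in § 2):

def prompt_accuracy(prompt, pairs, predict):
    """A(t_{r,i}): fraction of <x, y> pairs for which the LM's top prediction
    under this prompt equals the gold object. `predict(x, prompt)` is an
    assumed helper returning the argmax token."""
    correct = sum(1 for x, y in pairs if predict(x, prompt) == y)
    return correct / len(pairs)

def select_top1(prompts, pairs, predict):
    # choose the single prompt with the highest training-set accuracy
    return max(prompts, key=lambda t: prompt_accuracy(t, pairs, predict))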

4.2 Rank-based Ensemble

Next we examine methods that use not only the top-1 prompt, but combine together multiple prompts. The advantage of this is that the LM may have observed different entity pairs in different contexts within its training data, and having a variety of prompts may allow for elicitation of knowledge that appeared in these different contexts.

Our first method for ensembling is a parameter-free method that averages the predictions of the top-ranked prompts. We rank all the prompts based on their accuracy of predicting the objects on the training set, and use the average log probabilities4 from the top K prompts to calculate the probability of the object:

$$s(y \mid x, r) = \frac{1}{K} \sum_{i=1}^{K} \log P_{\mathrm{LM}}(y \mid x, t_{r,i}), \qquad (1)$$

$$P(y \mid x, r) = \mathrm{softmax}\!\left(s(\cdot \mid x, r)\right)_{y}, \qquad (2)$$

where t_{r,i} is the prompt ranked at the i-th position. Here, K is a hyper-parameter, where a small K focuses on the few most accurate prompts, and a large K increases the diversity of the prompts.

4 Intuitively, because we are combining scores in log space, this has the effect of penalizing objects that are very unlikely given any certain prompt in the collection. We also compare with linear combination in the ablations in § 5.3.
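A sketch of Equations 1 and 2, assuming the per-prompt log-probabilities over the vocabulary have already been computed and the prompts are sorted by training accuracy:

import torch

def ensemble_scores(log_probs, K):
    """log_probs: tensor of shape (T, |V|) holding log P_LM(y | x, t_{r,i}) for
    prompts already sorted by training accuracy. Averages the top-K rows
    (Equation 1) and renormalizes with a softmax (Equation 2)."""
    s = log_probs[:K].mean(dim=0)          # (1/K) * sum_i log P_LM(y | x, t_{r,i})
    return torch.log_softmax(s, dim=-1)    # log P(y | x, r)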

4.3 Optimized Ensemble

The above method treats the top K prompts equally, which is sub-optimal given that some prompts are more reliable than others. Thus, we also propose a method that directly optimizes prompt weights. Formally, we re-define the score in Equation 1 as:

$$s(y \mid x, r) = \sum_{i=1}^{T} P_{\theta_r}(t_{r,i} \mid r) \log P_{\mathrm{LM}}(y \mid x, t_{r,i}), \qquad (3)$$

where P_{θ_r}(t_{r,i} | r) = softmax(θ_r) is a distribution over prompts parameterized by θ_r, a T-sized real-valued vector. For every relation, we learn to score a different set of T candidate prompts, so the total number of parameters is T times the number of relations. The parameter θ_r is optimized to maximize the probability of the gold-standard objects P(y | x, r) over the training data.

5 Main Experiments

5.1 Experimental Settings

In this section, we assess the extent to which our prompts can improve fact prediction performance, raising the lower bound on the knowledge we discern is contained in LMs.

Dataset As data, we use the T-REx subset (ElSahar et al., 2018) of the LAMA benchmark (Petroni et al., 2019), which has a broader set of 41 relations (compared with the Google-RE subset, which covers only 3). Each relation is associated with at most 1000 subject-object pairs from Wikidata, and a single manually designed



prompt. To learn to mine prompts (§ 3.1), rank prompts (§ 4.2), or learn ensemble weights (§ 4.3), we create a separate training set of subject-object pairs, also from Wikidata, for each relation, with no overlap with the T-REx dataset. We denote this training set T-REx-train. For consistency with the T-REx dataset in LAMA, T-REx-train is also chosen to contain only single-token objects. To investigate the generality of our method, we also report the performance of our methods on the Google-RE subset,5 which takes a similar form to T-REx but is relatively small and only covers three relations.

Pörner et al. (2019) note that some facts in LAMA can be recalled solely based on surface forms of entities, without memorizing facts. They filter out those easy-to-guess facts and create a more difficult benchmark, denoted LAMA-UHN. We also conduct experiments on the T-REx subset of LAMA-UHN (i.e., T-REx-UHN) to investigate whether our methods can still obtain improvements on this harder benchmark. Dataset statistics are summarized in Table 1.

Models As for the models to probe, in our main experiments we use the standard BERT-base and BERT-large models (Devlin et al., 2019). We also perform some experiments with other pre-trained models enhanced with external entity representations, namely ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), which we believe may do better on recall of entities.

Evaluation Metrics We use two metrics to evaluate the success of prompts in probing LMs. The first evaluation metric, micro-averaged accuracy, follows the LAMA benchmark6 in calculating the accuracy over all subject-object pairs for relation r:

$$\frac{1}{|\mathcal{R}|} \sum_{\langle x, y \rangle \in \mathcal{R}} \delta(\hat{y} = y),$$

where ŷ is the prediction and y is the ground truth. We then average across all relations. However, we found that the object distributions

5 https://code.google.com/archive/p/relation-extraction-corpus/.

6 In LAMA, this is called ''P@1.'' There might be multiple correct answers in some cases, e.g., a person speaking multiple languages, but we only use one ground truth. We leave exploring more advanced evaluation methods to future work.

Properties          T-REx   T-REx-UHN   T-REx-train
#sub-obj pairs      830.2   661.1       948.7
#unique subjects    767.8   600.8       880.1
#unique objects     150.9   120.5       354.6
object entropy      3.6     3.4         4.4

Table 1: Dataset statistics. All the values are averaged across the 41 relations.

of some relations are extremely skewed (e.g., more than half of the objects in relation native language are French). This can lead to deceptively high scores, even for a majority-class baseline that picks the most common object for each relation, which achieves a score of 22.0%. To mitigate this problem, we also report macro-averaged accuracy, which computes accuracy for each unique object separately, then averages them together to get the relation-level accuracy:

$$\frac{1}{|\mathrm{uni\_obj}(\mathcal{R})|} \sum_{y' \in \mathrm{uni\_obj}(\mathcal{R})} \frac{\sum_{\langle x, y \rangle \in \mathcal{R},\, y = y'} \delta(\hat{y} = y)}{|\{y \mid \langle x, y \rangle \in \mathcal{R},\, y = y'\}|},$$

where uni_obj(R) returns the set of unique objects for relation r. This is a much stricter metric, with the majority-class baseline only achieving a score of 2.2%.
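A sketch of the macro-averaged accuracy computation for a single relation (our own illustration; the input formats are assumptions):

from collections import defaultdict

def macro_accuracy(examples, predictions):
    """examples: list of (x, y) gold pairs for one relation; predictions: list of
    predicted objects aligned with `examples`. Accuracy is first computed per
    unique gold object, then averaged, so frequent objects do not dominate."""
    per_object = defaultdict(list)
    for (x, y), y_hat in zip(examples, predictions):
        per_object[y].append(1.0 if y_hat == y else 0.0)
    return sum(sum(v) / len(v) for v in per_object.values()) / len(per_object)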

Methods We attempted different methods for prompt generation and selection/ensembling, and compare them with the manually designed prompts used in Petroni et al. (2019). Majority refers to predicting the majority object for each relation, as mentioned above. Man is the baseline from Petroni et al. (2019) that only uses the manually designed prompts for retrieval. Mine (§ 3.1) uses the prompts mined from Wikipedia through both middle words and dependency paths, and Mine+Man combines them with the manual prompts. Mine+Para (§ 3.2) paraphrases the highest-ranked mined prompt for each relation, while Man+Para uses the manual one instead.

The prompts are combined either by averaging
the log probabilities from the TopK highest-
ranked prompts (§ 4.2) or the weights after
optimization (§ 4.3; Opti.). Oracle represents the
upper bound of the performance of the generated
prompts, where a fact is judged as correct if any
one of the prompts allows the LM to successfully
predict the object.

Implementation Details We use the T = 40 most frequent prompts, either generated through mining


or paraphrasing, in all experiments, and the number of candidates in back-translation is set to B = 7. We remove prompts containing only stopwords/punctuation or longer than 10 words to reduce noise. We use the round-trip English-German neural machine translation models pre-trained on WMT'19 (Ng et al., 2019) for back-translation, as English-German is one of the most highly resourced language pairs.7 When optimizing ensemble parameters, we use Adam (Kingma and Ba, 2015) with default parameters and a batch size of 32.

Prompts      Top1   Top3   Top5   Opti.   Oracle
BERT-base (Man=31.1)
Mine         31.4   34.2   34.7   38.9    50.7
Mine+Man     31.6   35.9   35.1   39.6    52.6
Mine+Para    32.7   34.0   34.5   36.2    48.1
Man+Para     34.1   35.8   36.6   37.3    47.9
BERT-large (Man=32.3)
Mine         37.0   37.0   36.4   43.7    54.4
Mine+Man     39.4   40.6   38.4   43.9    56.1
Mine+Para    37.8   38.6   38.6   40.1    51.8
Man+Para     35.9   37.3   38.0   38.8    50.0

Table 2: Micro-averaged accuracy of different methods (%). Majority gives us 22.0%. Italics indicate the best single-prompt accuracy, and bold indicates the best non-oracle accuracy overall.

5.2 Evaluation Results

Micro- and macro-averaged accuracy of the different methods are reported in Tables 2 and 3, respectively.

Single Prompt Experiments When only one prompt is used (the first Top1 column in both tables), the best of the proposed prompt generation methods increases micro-averaged accuracy from 31.1% to 34.1% on BERT-base, and from 32.3% to 39.4% on BERT-large. This demonstrates that the manually created prompts are a somewhat weak lower bound; there are other prompts that further improve the ability to query knowledge from LMs. Table 4 shows some of the mined prompts that resulted in a large performance gain compared with the manual ones. For the relation religion, ''x who converted to y'' improved 60.0% over the manually defined prompt ''x is affiliated with the y religion'', and for the relation subclass of, ''x is a type of y'' raised the accuracy by 22.7% over ''x is a subclass of y''. It can be seen that the largest gains from using mined prompts seem to occur in cases where the manually defined prompt is more complicated syntactically (e.g., the former), or where it uses less common wording (e.g., the latter) than the mined prompt.

Prompt Ensembling Next we turn to experiments that use multiple prompts to query the LM. Comparing the single-prompt results in column 1 to the ensembled results in the following three columns, we can see that ensembling multiple prompts almost always leads to better performance. The simple average used in Top3 and

7https://github.com/pytorch/fairseq/tree/

master/examples/wmt19.


Prompts      Top1   Top3   Top5   Opti.   Oracle
BERT-base (Man=22.8)
Mine         20.7   23.9   22.7   25.7    36.2
Mine+Man     21.3   24.8   23.8   26.6    38.0
Mine+Para    21.2   23.0   22.4   23.6    34.1
Man+Para     22.8   24.6   23.8   25.0    34.9
BERT-large (Man=25.7)
Mine         26.4   26.3   25.9   30.1    40.7
Mine+Man     28.1   28.3   27.3   30.7    42.2
Mine+Para    26.2   27.1   27.0   27.1    38.3
Man+Para     25.9   27.8   28.3   28.0    39.3

Table 3: Macro-averaged accuracy of different methods (%). Majority gives us 2.2%. Italics indicate the best single-prompt accuracy, and bold indicates the best non-oracle accuracy overall.

Top5 outperforms Top1 across the different prompt generation methods. The optimized ensemble further raises micro-averaged accuracy to 38.9% and 43.7% on BERT-base and BERT-large, respectively, outperforming the rank-based ensemble by a large margin. These two sets of results demonstrate that diverse prompts can indeed query the LM in different ways, and that the optimization-based method is able to find weights that effectively combine different prompts together.

We list the learned weights of the top-3 mined prompts and the accuracy gain over only using the top-1 prompt in Table 5. Weights tend to concentrate on one particular prompt, and the other prompts serve as complements. We also depict the performance of the rank-based ensemble method


ID     Relation                Manual Prompts                        Mined Prompts             Acc. Gain
P140   religion                x is affiliated with the y religion   x who converted to y      +60.0
P159   headquarters location   The headquarter of x is in y          x is based in y           +4.9
P20    place of death          x died in y                           x died at his home in y   +4.6
P264   record label            x is represented by music label y     x recorded for y          +17.2
P279   subclass of             x is a subclass of y                  x is a type of y          +22.7
P39    position held           x has the position of y               x is elected y            +7.9

Table 4: Micro-averaged accuracy gain (%) of the mined prompts over the manual prompts.

ID     Relation       Prompts and Weights                                                               Acc. Gain
P127   owned by       x is owned by y (.485); x was acquired by y (.151); x division of y (.151)        +7.0
P140   religion       x who converted to y (.615); y tirthankara x (.190); y dedicated to x (.110)      +12.2
P176   manufacturer   y introduced the x (.594); y announced the x (.286); x attributed to the y (.111) +7.0

Table 5: Weights of the top-3 mined prompts, and the micro-averaged accuracy gain (%) over using the top-1 prompt.

with respect to the number of prompts in Figure 2.
For mined prompts, top-2 or top-3 usually gives
us the best results, while for paraphrased prompts,
top-5 is the best. Incorporating more prompts does
not always improve accuracy, a finding consistent
with the rapidly decreasing weights learned by
the optimization-based method. The gap between
Oracle and Opti. indicates that there is still space
for improvement using better ensemble methods.

Mining vs. Paraphrasing For the rank-based ensembles (Top1, 3, 5), prompts generated by paraphrasing usually perform better than mined prompts, while for the optimization-based ensemble (Opti.), mined prompts perform better. We conjecture this is because mined prompts exhibit more variation compared to paraphrases, and proper weighting is of central importance. This difference in variation can be observed in the average edit distance between the prompts of each class, which is 3.27 and 2.73 for mined and paraphrased prompts, respectively. However, the improvement led by ensembling paraphrases is still significant over just using one prompt (Top1 vs. Opti.), raising micro-averaged accuracy from 32.7% to 36.2% on BERT-base, and from 37.8% to 40.1% on BERT-large. This indicates that even small modifications to prompts can result in relatively large changes in predictions. Table 6 demonstrates cases where modification of one word (either a function or content word) leads to significant accuracy


Figure 2: Performance for different top-K ensembles.
ID     Modification               Acc. Gain
P413   x plays in→at y position   +23.2
P495   x was created→made in y    +10.8
P495   x was→is created in y      +10.0
P361   x is a part of y           +2.7
P413   x plays in y position      +2.2

Table 6: Small modifications (update, insert, and delete) in paraphrases lead to large accuracy gains (%).

improvements, indicating that large-scale LMs
are still brittle to small changes in the ways they
are queried.

Middle-word vs. Dependency-based We compare the performance of only using middle-word prompts with that of concatenating them with dependency-based prompts in Table 7. The


Prompts    Top1   Top3   Top5   Opti.   Oracle
Mid        30.7   32.7   31.2   36.9    45.1
Mid+Dep    31.4   34.2   34.7   38.9    50.7

Table 7: Ablation study of middle-word and dependency-based prompts on BERT-base.

Model      Man    Mine   Mine+Man   Mine+Para   Man+Para
BERT       31.1   38.9   39.6       36.2        37.3
ERNIE      32.1   42.3   43.8       40.1        41.1
KnowBert   26.2   34.1   34.6       31.9        32.1

Table 8: Micro-averaged accuracy (%) of various LMs.

improvements confirm our intuition that words
belonging to the dependency path but not in the
middle of the subject and object are also indicative
of the relation.

Micro vs. Macro Comparing Tables 2 and 3, we can see that macro-averaged accuracy is much lower than micro-averaged accuracy, indicating that macro-averaged accuracy is a more challenging metric that evaluates how many unique objects LMs know. Our optimization-based method improves macro-averaged accuracy from 22.8% to 25.7% on BERT-base, and from 25.7% to 30.1% on BERT-large. This again confirms the effectiveness of ensembling multiple prompts, but the gains are somewhat smaller. Notably, in our optimization-based methods, the ensemble weights are optimized on each example in the training set, which is more conducive to optimizing micro-averaged accuracy. Optimization to improve macro-averaged accuracy is potentially an interesting direction for future work that may result in prompts more generally applicable to different types of objects.

Performance of Different LMs In Table 8, we compare BERT with ERNIE and KnowBert, which are enhanced with external knowledge by explicitly incorporating entity embeddings. ERNIE outperforms BERT by 1 point even with the manually defined prompts, but our prompt generation methods further emphasize the difference between the two methods, with the highest accuracy numbers differing by 4.2 points using the Mine+Man method. This

Model        Man    Mine   Mine+Man   Mine+Para   Man+Para
BERT-base    21.3   28.7   29.4       26.8        27.0
BERT-large   24.2   34.5   34.5       31.6        29.8

Table 9: Micro-averaged accuracy (%) on LAMA-UHN.

Model        Man    Mine   Mine+Man   Mine+Para   Man+Para
BERT-base    9.8    10.0   10.4       9.6         10.0
BERT-large   10.5   10.6   11.3       10.4        10.7

Table 10: Micro-averaged accuracy (%) on Google-RE.

indicates that if LMs are queried effectively, the differences between highly performant models may become clearer. KnowBert underperforms BERT on LAMA, which is the opposite of the observation made in Peters et al. (2019). This is probably because multi-token subjects/objects are used to evaluate KnowBert in Peters et al. (2019), while LAMA contains only single-token objects.

LAMA-UHN Evaluation The performance on the LAMA-UHN benchmark is reported in Table 9. Although the overall performance drops dramatically compared to the performance on the original LAMA benchmark (Table 2), optimized ensembles can still outperform manual prompts by a large margin, indicating that our methods are effective in retrieving knowledge that cannot be inferred from surface forms.

5.3 Analysis

Next, we perform further analysis to better understand what type of prompts proved most suitable for facilitating retrieval of knowledge from LMs.

Prediction Consistency by Prompt We first analyze the conditions under which prompts will yield different predictions. We define the divergence between the predictions of two prompts t_{r,i} and t_{r,j} using the following equation:

$$\mathrm{Div}(t_{r,i}, t_{r,j}) = \frac{\sum_{\langle x, y \rangle \in \mathcal{R}} \delta\!\left(C(x, y, t_{r,i}) \neq C(x, y, t_{r,j})\right)}{|\mathcal{R}|},$$

where C(x, y, t_{r,i}) = 1 if prompt t_{r,i} can successfully predict y and 0 otherwise, and δ(·) is
Cifra 3: Correlation of edit distance between prompts
and their prediction divergence.

Cifra 4: Ranking position distribution of prompts with
diferentes patrones. Lower is better.

x/y V y/x  |  x/y V P y/x  |  x/y V W* P y/x

V = verb particle? adv?
W = (noun | adj | adv | pron | det)
P = (prep | particle | inf. marker)

Table 11: Three part-of-speech-based regular expressions used in ReVerb to identify relational phrases.

Kronecker’s delta. For each relation, we normalize
the edit distance of two prompts into [0, 1] y
bucket the normalized distance into five bins
with intervals of 0.2. We plot a box chart for
each bin to visualize the distribution of prediction
divergence in Figure 3, with the green triangles
representing mean values and the green bars in
the box representing median values. As the edit
distance becomes larger, the divergence increases,
which confirms our intuition that very different
prompts tend to cause different prediction results.
The Pearson correlation coefficient is 0.25, cual
shows that there is a weak correlation between
these two quantities.
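A sketch of this divergence computation (our own illustration; the 0/1 outcome lists are assumed to be precomputed):

def divergence(correct_i, correct_j):
    """Div(t_{r,i}, t_{r,j}): fraction of <x, y> pairs on which two prompts
    disagree about whether the gold object is recovered. `correct_i` and
    `correct_j` are aligned lists of 0/1 outcomes (the C(x, y, t) values)."""
    assert len(correct_i) == len(correct_j)
    return sum(a != b for a, b in zip(correct_i, correct_j)) / len(correct_i)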

Performance on Google-RE We also report the performance of the optimized ensemble on the Google-RE subset in Table 10. Again, ensembling diverse prompts improves accuracy for both the BERT-base and BERT-large models. The gains are somewhat smaller than those on the T-REx subset, which might be caused by the fact that there are only three relations and one of them (predicting the birth date of a person) is particularly hard, to the extent that only one prompt yields non-zero accuracy.


POS-based Analysis Next, we try to examine which types of prompts tend to be effective in the abstract by examining the part-of-speech (POS) patterns of prompts that successfully extract knowledge from LMs. In open information extraction systems (Banko et al., 2007), manually defined patterns are often leveraged to filter out noisy relational phrases. For example, ReVerb (Fader et al., 2011) incorporates the three syntactic constraints listed in Table 11 to improve the coherence and informativeness of the mined relational phrases. To test whether these patterns are also indicative of the ability of a prompt to retrieve knowledge from LMs, we use these three patterns to group prompts generated by our methods into four clusters, where the ''other'' cluster contains prompts that do not match any pattern. We then calculate the rank of each prompt within the extracted prompts, and plot the distribution of ranks using box plots in Figure 4.8 We can see that the average rank of prompts matching these patterns is better than that of those in the ''other'' group, confirming our intuition that good prompts should conform to these patterns. Some of the best-performing prompts' POS signatures are ''x VBD VBN IN y'' (e.g., ''x was born in y'') and ''x VBZ DT NN IN y'' (e.g., ''x is the capital of y'').

Cross-model Consistency Finally, it is of interest to know whether the prompts we are extracting are highly tailored to a

8 We use the ranking position of a prompt to represent its quality instead of its accuracy because the accuracy distributions of different relations might span different ranges, making accuracy not directly comparable across relations.

Test:       BERT-base        BERT-large
Train:      base     large   large    base
Mine        38.9     38.7    43.7     42.2
Mine+Man    39.6     40.1    43.9     42.2
Mine+Para   36.2     35.6    40.1     39.0
Man+Para    37.3     35.6    38.8     37.5

Table 12: Cross-model micro-averaged accuracy (%). The first row is the model to test, and the second row is the model on which prompt weights are learned.

Test:       BERT             ERNIE
Train:      BERT     ERNIE   ERNIE    BERT
Mine        38.9     38.0    42.3     38.7
Mine+Man    39.6     39.5    43.8     40.5
Mine+Para   36.2     34.2    40.1     39.0
Man+Para    37.3     35.2    41.1     40.3

Table 13: Cross-model micro-averaged accuracy (%). The first row is the model to test, and the second row is the model on which prompt weights are learned.

specific model, or whether they can generalize across models. To do so, we use two settings: one compares BERT-base and BERT-large, the same model architecture with different sizes; the other compares BERT-base and ERNIE, different model architectures with a comparable size. In each setting, we compare when the optimization-based ensembles are trained on the same model, or when they are trained on one model and tested on the other. As shown in Tables 12 and 13, we found that in general there is usually some drop in performance in the cross-model scenario (third and fifth columns), but the losses tend to be small, and the highest performance when querying BERT-base is actually achieved by the weights optimized on BERT-large. Notably, the best accuracies of 40.1% and 42.2% (Table 12) and 39.5% and 40.5% (Table 13) with the weights optimized on the other model are still much higher than those obtained with the manual prompts, indicating that optimized prompts still afford large gains across models. Another interesting observation is that the drop in performance on ERNIE (last two columns in Table 13) is larger than that on BERT-large (last two columns in Table 12) using weights optimized on BERT-base, indicating that models sharing the same architecture benefit more from the same prompts.

Linear vs. Log-linear Combination As mentioned in § 4.2, we use a log-linear combination of probabilities in our main experiments. However, it is also possible to calculate probabilities through regular linear interpolation:

$$P(y \mid x, r) = \sum_{i=1}^{K} \frac{1}{K} P_{\mathrm{LM}}(y \mid x, t_{r,i}). \qquad (4)$$


Figure 5: Performance of two interpolation methods.

We compare these two ways of combining predictions from multiple mined prompts in Figure 5 (§ 4.2). We assume that log-linear combination outperforms linear combination because log probabilities make it possible to penalize objects that are very unlikely given any certain prompt.

6 Omitted Design Elements

Finally, in addition to the elements of our main proposed methodology in § 3 and § 4, we experimented with a few additional methods that did not prove highly effective, and thus were omitted from our final design. We briefly describe these below, along with cursory experimental results.

6.1 LM-aware Prompt Generation

We examined methods to generate prompts by
solving an optimization problem that maximizes
the probability of producing the ground-truth
objects with respect to the prompts:

$$t_r^{*} = \arg\max_{t_r} P_{\mathrm{LM}}(y \mid x, t_r),$$

where P_LM(y | x, t_r) is parameterized with a pre-trained LM. In other words, this method directly searches for a prompt that causes the LM to assign the ground-truth objects the highest probability.


Prompts   Top1   Top3   Top5   Opti.   Oracle
before    31.9   34.5   33.8   38.1    47.9
after     30.2   32.5   34.7   37.5    50.8

Table 14: Micro-averaged accuracy (%) before and after LM-aware prompt fine-tuning.

Features     Mine            Paraphrase
             micro   macro   micro   macro
forward      38.1    25.2    37.3    25.0
+backward    38.2    25.5    37.4    25.2

Table 15: Performance (%) of using forward and backward features with BERT-base.

Solving this problem of finding text sequences that optimize some continuous objective has been studied both in the context of end-to-end sequence generation (Hoang et al., 2017) and in the context of making small changes to an existing input for adversarial attacks (Ebrahimi et al., 2018; Wallace et al., 2019). However, we found that directly optimizing prompts guided by gradients was unstable and often yielded prompts in unnatural English in our preliminary experiments. Thus, we instead resorted to a more straightforward hill-climbing method that starts with an initial prompt, then masks out one token at a time and replaces it with the most probable token conditioned on the other tokens, inspired by the mask-predict decoding algorithm used in non-autoregressive machine translation (Ghazvininejad et al., 2019):9

$$P_{\mathrm{LM}}(w_i \mid t_r \setminus i) = \frac{\sum_{\langle x, y \rangle \in \mathcal{R}} P_{\mathrm{LM}}(w_i \mid x, t_r \setminus i, y)}{|\mathcal{R}|},$$

where wi is the i-th token in the prompt and tr \ i
is the prompt with the i-th token masked out. Nosotros
followed a simple rule that modifies a prompt from
left to right, and this is repeated until convergence.
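One refinement pass of this hill-climbing procedure can be sketched as follows (an illustration; fill_masked stands in for the averaged masked-probability argmax defined above and is an assumed helper):

def refine_prompt(prompt_tokens, pairs, fill_masked):
    """One left-to-right refinement pass: mask each prompt token in turn and
    replace it with the token that maximizes the average masked probability
    over the training pairs. `fill_masked(tokens, position, pairs)` is an
    assumed helper that returns that argmax token for a masked LM; repeating
    the pass until the prompt stops changing gives the hill-climbing method."""
    tokens = list(prompt_tokens)
    for i, tok in enumerate(tokens):
        if tok in ("[X]", "[Y]"):
            continue  # never overwrite the subject/object placeholders
        tokens[i] = fill_masked(tokens, i, pairs)
    return tokens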

We used this method to refine all the mined and manual prompts on the T-REx-train dataset, and display their performance on the T-REx dataset in Table 14. After fine-tuning, the oracle performance increased significantly, while the ensemble performances (both rank-based and optimization-based) dropped slightly. This indicates that LM-aware fine-tuning has the potential to discover better prompts, but some portion of the refined prompts may have over-fit to the training set upon which they were optimized.

9 In theory, this algorithm can be applied both to masked LMs like BERT and to traditional left-to-right LMs, since the masked probability can be computed using Bayes' theorem for traditional LMs. However, in practice, due to the large vocabulary size, it can only be approximated with beam search, or computed with more complicated continuous optimization algorithms (Hoang et al., 2017).

6.2 Forward and Backward Probabilities

Finally, given class imbalance and the propensity of the model to over-predict the majority object, we examine a method to encourage the model to predict subject-object pairs that are more strongly aligned. Inspired by the maximum mutual information objective used in Li et al. (2016a), we add the backward log probability log P_LM(x | y, t_{r,i}) of each prompt to our optimization-based scoring function in Equation 3. Due to the large search space for objects, we turn to an approximation that only computes the backward probability for the most probable B objects given by the forward probability, at both training and test time. As shown in Table 15, the improvement resulting from the backward probability is small, indicating that a diversity-promoting scoring function might not be necessary for knowledge retrieval from LMs.

7 Trabajo relacionado

Much work has focused on understanding the internal representations in neural NLP models (Belinkov and Glass, 2019), either by using extrinsic probing tasks to examine whether certain linguistic properties can be predicted from those representations (Shi et al., 2016; Linzen et al., 2016; Belinkov et al., 2017), or by ablations to the models to investigate how behavior varies (Li et al., 2016b; Smith et al., 2017). For contextualized representations in particular, a broad suite of NLP tasks are used to analyze both syntactic and semantic properties, providing evidence that contextualized representations encode linguistic knowledge in different layers (Hewitt and Manning, 2019; Tenney et al., 2019a; Tenney et al., 2019b; Jawahar et al., 2019; Goldberg, 2019).

Different from analyses probing the representations themselves, our work follows Petroni et al. (2019) and Pörner et al. (2019) in probing for factual


knowledge. They use manually defined prompts, which may under-estimate the true performance obtainable by LMs. Concurrently to this work, Bouraoui et al. (2020) made a similar observation that using different prompts can help better extract relational knowledge from LMs, but they use models explicitly trained for relation extraction, whereas our methods examine the knowledge included in LMs without any additional training. Orthogonally, some previous works integrate external knowledge bases so that the language generation process is explicitly conditioned on symbolic knowledge (Ahn et al., 2016; Yang et al., 2017; Logan et al., 2019; Hayashi et al., 2020). Similar extensions have been applied to pre-trained LMs like BERT, where contextualized representations are enhanced with entity embeddings (Zhang et al., 2019; Peters et al., 2019; Pörner et al., 2019). In contrast, we focus on better knowledge retrieval through prompts from LMs as-is, without modifying them.

8 Conclusion

In this paper, we examined the importance of the prompts used in retrieving factual knowledge from language models. We propose mining-based and paraphrasing-based methods to systematically generate diverse prompts to query specific pieces of relational knowledge. Those prompts, when combined together, improve factual knowledge retrieval accuracy by 8%, outperforming manually designed prompts by a large margin. Our analysis indicates that LMs are indeed more knowledgeable than initially indicated by previous results, but they are also quite sensitive to how we query them. This indicates potential future directions such as (1) more robust LMs that can be queried in different ways but still return similar results, (2) methods to incorporate factual knowledge in LMs, and (3) further improvements in optimizing methods to query LMs for knowledge. Finally, we have released all of our learned prompts to the community as the LM Prompt and Query Archive (LPAQA), available at: https://github.com/jzbjyb/LPAQA.

Acknowledgments

This work was supported by a gift from Bosch Research and NSF award no. 1815287. We would like to thank Paul Michel, Hiroaki Hayashi, Pengcheng Yin, and Shuyan Zhou for their insightful comments and suggestions.

References

Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Digital Libraries, June 2-7, 2000, San Antonio, TX, USA, pages 85–94. ACM.

Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. CoRR, abs/1608.00318v2.

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics.

Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 2670–2676.

Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, Vancouver, Canada. Association for Computational Linguistics.

Yonatan Belinkov and James R. Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72.

Rahul Bhagat and Deepak Ravichandran. 2008. Large scale acquisition of paraphrases for learning surface patterns. In Proceedings

of ACL-08: HLT, pages 674–682, Columbus, Ohio. Association for Computational Linguistics.

Zied Bouraoui, Jose Camacho-Collados, and Steven Schockaert. 2020. Inducing relational knowledge from BERT. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA.

Claudio Carpineto and Giovanni Romano. 2012. A survey of automatic query expansion in information retrieval. ACM Computing Surveys, 44(1):1:1–1:50.

Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montréal, Quebec, Canada, pages 3079–3087.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186.

Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics.

Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Frédérique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.

Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1535–1545.

Michael Gamon, Anthony Aue, and Martine Smets. 2005. Sentence-level MT evaluation without reference translations: Beyond language modeling. In Proceedings of EAMT, pages 103–111.

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114–6123, Hong Kong, China. Association for Computational Linguistics.

Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. CoRR, abs/1901.05287v1.

Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2020. Latent relation language models. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA.

John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4129–4138.

Cong Duy Vu Hoang, Gholamreza Haffari, and Trevor Cohn. 2017. Towards decoding as continuous optimisation in neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 146–156, Copenhagen, Denmark. Association for Computational Linguistics.

Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh.

2019. Barack's wife Hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5962–5971.

Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3651–3657.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 110–119.

Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. CoRR, abs/1612.08220v3.

Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521–535.

Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893, Valencia, Spain. Association for Computational Linguistics.

Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. CoRR, abs/1806.08730v1.

Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11–12, 2016, pages 51–61.

Gábor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings.

Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings.

Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 234–239. IEEE.

Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation, WMT 2019, Florence, Italy, August 1–2, 2019 – Volume 2: Shared Task Papers, Day 1, pages 314–319.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1–6, 2018, Volume 1 (Long Papers), pages 2227–2237.

Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference
on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43–54, Hong Kong, China. Association for Computational Linguistics.

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.

Nina Pörner, Ulli Waltinger, and Hinrich Schütze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. CoRR, abs/1911.03681v1.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! Leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.

Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 41–47. Association for Computational Linguistics.

Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigating a generic paraphrase-based approach for relation extraction. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics.

Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027–3035.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7–12, 2016, Berlin, Germany, Volume 1: Long Papers.

Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526–1534, Austin, Texas. Association for Computational Linguistics.

Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3–7, 2017, Volume 1: Long Papers, pages 1249–1258.

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28–August 2, 2019, Volume 1: Long Papers, pages 4593–4601.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6–9, 2019.

Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael
Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17–21, 2015, pages 1499–1509.

Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. CoRR, abs/1806.02847v2.

Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics.

Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9–11, 2017, pages 1850–1859.

Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28–August 2, 2019, Volume 1: Long Papers, pages 1441–1451.

Geoffrey Zweig and Christopher J. C. Burges. 2011. The Microsoft Research sentence completion challenge. Microsoft Research, Redmond, Washington, USA, Technical Report MSR-TR-2011-129.
