How Can We Know What Language Models Know?

Zhengbao Jiang1∗ Frank F. Xu1∗

Jun Araki2 Graham Neubig1

1Language Technologies Institute, Carnegie Mellon University
2Bosch Research North America

{zhengbaj,fangzhex,gneubig}@cs.cmu.edu

jun.araki@us.bosch.com

Abstract

Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ‘‘Obama is a ___ by profession’’. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ‘‘Obama worked as a ___’’ may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.

1 Introduction

Recent years have seen the primary role of language models (LMs) transition from generating or evaluating the fluency of natural text (Mikolov and Zweig, 2012; Merity et al., 2018; Melis et al., 2018; Gamon et al., 2005) to being a powerful tool for text understanding. This understanding has mainly been achieved through the use of language modeling as a pre-training task for feature extractors, where the hidden vectors learned through a language modeling objective are then used in down-stream language understanding systems (Dai and Le, 2015; Melamud et al., 2016; Peters et al., 2018; Devlin et al., 2019).

∗ The first two authors contributed equally.

Interestingly, it is also becoming apparent that LMs1 themselves can be used as a tool for text understanding by formulating queries in natural language and either generating textual answers directly (McCann et al., 2018; Radford et al., 2019), or assessing multiple choices and picking the most likely one (Zweig and Burges, 2011; Rajani et al., 2019). For example, LMs have been used to answer factoid questions (Radford et al., 2019), answer common sense queries (Trinh and Le, 2018; Sap et al., 2019), or extract factual knowledge about relations between entities (Petroni et al., 2019; Baldini Soares et al., 2019). Regardless of the end task, the knowledge contained in LMs is probed by providing a prompt, and letting the LM either generate the continuation of a prefix (e.g., ‘‘Barack Obama was born in ___’’), or predict missing words in a cloze-style template (e.g., ‘‘Barack Obama is a ___ by profession’’).

However, while this paradigm has been used to achieve a number of intriguing results regarding the knowledge expressed by LMs, they usually rely on prompts that were manually created based on the intuition of the experimenter. These manually created prompts (e.g., ‘‘Barack Obama was born in ___’’) might be sub-optimal because LMs might have learned target knowledge from substantially different contexts (e.g., ‘‘The birth place of Barack Obama is Honolulu, Hawaii.’’) during their training. Thus it is quite possible that a fact that the LM does know cannot be retrieved due to the prompts not being effective queries for the fact. Hence, existing results are simply a lower bound on the extent of knowledge contained in LMs, and in fact, LMs may be even more knowledgeable than these initial results indicate.

1 Some models we use in this paper, e.g., BERT (Devlin et al., 2019), are bi-directional, and do not directly define a probability distribution over text, which is the underlying definition of an LM. Nevertheless, we call them LMs for simplicity.

Transactions of the Association for Computational Linguistics, vol. 8, pp. 423–438, 2020. https://doi.org/10.1162/tacl_a_00324
Action Editor: Timothy Baldwin. Submission batch: 12/2019; Revision batch: 3/2020; Published 7/2020.
© 2020 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.


In this paper we ask the question: ‘‘How can we tighten this lower bound and get a more accurate estimate of the knowledge contained in state-of-the-art LMs?’’ This is interesting both scientifically, as a probe of the knowledge that LMs contain, and from an engineering perspective, as it will result in higher recall when using LMs as part of a knowledge extraction system.

In particular, we focus on the setting of Petroni et al. (2019), who examine extracting knowledge regarding the relations between entities (definitions in § 2). We propose two automatic methods to systematically improve the breadth and quality of the prompts used to query the existence of a relation (§ 3). Specifically, as shown in Figure 1, these are mining-based methods inspired by previous relation extraction methods (Ravichandran and Hovy, 2002), and paraphrasing-based methods that take a seed prompt (either manually created or automatically mined) and paraphrase it into several other semantically similar expressions. Further, because different prompts may work better when querying for different subject-object pairs, we also investigate lightweight ensemble methods to combine the answers from different prompts together (§ 4).

We experiment on the LAMA benchmark (Petroni et al., 2019), which is an English-language benchmark devised to test the ability of LMs to retrieve relations between entities (§ 5). We first demonstrate that improved prompts significantly improve accuracy on this task, with the one-best prompt extracted by our method raising accuracy from 31.1% to 34.1% on BERT-base (Devlin et al., 2019), with similar gains being obtained with BERT-large as well. We further demonstrate that using a diversity of prompts through ensembling further improves accuracy to 39.6%. We perform extensive analysis and ablations, gleaning insights both about how to best query the knowledge stored in LMs and about potential directions for incorporating knowledge into LMs themselves. Finally, we have released the resulting LM Prompt And Query Archive (LPAQA) to facilitate future experiments on probing knowledge contained in LMs.

Figure 1: Top-5 predictions and their log probabilities using different prompts (manual, mined, and paraphrased) to query BERT. Correct answer is underlined.

2 Knowledge Retrieval from LMs

Retrieving factual knowledge from LMs is quite different from querying standard declarative knowledge bases (KBs). In standard KBs, users formulate their information needs as a structured query defined by the KB schema and query language. For example, SELECT ?y WHERE {wd:Q76 wdt:P19 ?y} is a SPARQL query to search for the birth place of Barack Obama. In contrast, LMs must be queried by natural language prompts, such as ‘‘Barack Obama was born in ___’’, and the word assigned the highest probability in the blank will be returned as the answer. Unlike deterministic queries on KBs, this provides no guarantees of correctness or success.

While the idea of prompts is common to methods for extracting many varieties of knowledge from LMs, in this paper we specifically follow the formulation of Petroni et al. (2019), where factual knowledge is in the form of triples ⟨x, r, y⟩. Here x indicates the subject, y indicates the object, and r is their corresponding relation. To query the LM, r is associated with a cloze-style prompt t_r consisting of a sequence of tokens, two of which are placeholders for subjects and objects (e.g., ‘‘x plays at y position’’). The existence of the fact in the LM is assessed by replacing x with the surface form of the subject, and letting the model predict the missing object (e.g., ‘‘LeBron James plays at ___ position’’):2

\hat{y} = \arg\max_{y' \in V} P_{LM}(y' \mid x, t_r),

2 We can also go the other way around by filling in the objects and predicting the missing subjects. Since our focus is on improving prompts, we choose to be consistent with Petroni et al. (2019) to make a fair comparison, and leave exploring other settings to future work. Also notably, Petroni et al. (2019) only use objects consisting of a single token, so we only need to predict one word for the missing slot.

where V is the vocabulary, and P_LM(y' | x, t_r) is the LM probability of predicting y' in the blank conditioned on the other tokens (i.e., the subject and the prompt).3 We say that an LM has knowledge of a fact if ŷ is the same as the ground-truth y. Because we would like our prompts to most effectively elicit any knowledge contained in the LM itself, a ‘‘good’’ prompt should trigger the LM to predict the ground-truth objects as often as possible.

3 We restrict ourselves to masked LMs in this paper because the missing slot might not be the last token in the sentence, and computing this probability in traditional left-to-right LMs using Bayes' theorem is not tractable.

In previous work (McCann et al., 2018; Radford et al., 2019; Petroni et al., 2019), t_r has been a single manually defined prompt based on the intuition of the experimenter. As noted in the introduction, this method has no guarantee of being optimal, and thus we propose methods that learn effective prompts from a small set of training data consisting of gold subject-object pairs for each relation.
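To make the querying procedure concrete, below is a minimal sketch of how a single cloze-style prompt could be scored with a masked LM. It assumes the HuggingFace transformers library and the bert-base-cased checkpoint; the [X]/[Y] placeholder convention and the query helper are illustrative choices for this sketch, not the authors' released code.

```python
# Minimal sketch: fill a subject into a cloze prompt and take the argmax object.
# Assumes HuggingFace `transformers`; illustrative only.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def query(prompt: str, subject: str) -> str:
    """Return the highest-probability token for the object slot of `prompt`."""
    # "[X]" marks the subject slot and "[Y]" the object slot (assumed convention).
    text = prompt.replace("[X]", subject).replace("[Y]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits              # (1, seq_len, vocab_size)
    pred_id = logits[0, mask_pos].argmax().item()    # argmax over the vocabulary V
    return tokenizer.decode([pred_id])

print(query("[X] was born in [Y].", "Barack Obama"))  # expected: a location token
```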

3 Prompt Generation

First, we tackle prompt generation: the task of generating a set of prompts {t_{r,i}}_{i=1}^{T} for each relation r, where at least some of the prompts effectively trigger LMs to predict ground-truth objects. We employ two practical methods to either mine prompt candidates from a large corpus (§ 3.1) or diversify a seed prompt through paraphrasing (§ 3.2).

3.1 Mining-based Generation

Our first method is inspired by template-based relation extraction methods (Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002), which are based on the observation that words in the vicinity of the subject x and object y in a large corpus often describe the relation r. Based on this intuition, we first identify all the Wikipedia sentences that contain both subjects and objects of a specific relation r using the assumption of distant supervision, then propose two methods to extract prompts.

Middle-word Prompts Following the observation that words in the middle of the subject and object are often indicative of the relation, we directly use those words as prompts. For example, ‘‘Barack Obama was born in Hawaii’’ is converted into a prompt ‘‘x was born in y’’ by replacing the subject and the object with placeholders.

Dependency-based Prompts Toutanova et al. (2015) note that in cases of templates where words do not appear in the middle (e.g., ‘‘The capital of France is Paris’’), templates based on syntactic analysis of the sentence can be more effective for relation extraction. We follow this insight in our second strategy for prompt creation, which parses sentences with a dependency parser to identify the shortest dependency path between the subject and object, then uses the phrase spanning from the leftmost word to the rightmost word in the dependency path as a prompt. For example, the dependency path in the above example is ‘‘France ←pobj− of ←prep− capital ←nsubj− is −attr→ Paris’’, where the leftmost and rightmost words are ‘‘capital’’ and ‘‘Paris’’, giving a prompt of ‘‘capital of x is y’’.

Notably, these mining-based methods do not rely on any manually created prompts, and can thus be flexibly applied to any relation where we can obtain a set of subject-object pairs. This will result in diverse prompts, covering a wide variety of ways that the relation may be expressed in text. However, it may also be prone to noise, as many prompts acquired in this way may not be very indicative of the relation (e.g., ‘‘x, y’’), even if they are frequent.
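As a rough illustration of the middle-word variant of this mining procedure, the sketch below turns distantly supervised (sentence, subject, object) triples into candidate prompts and keeps the most frequent ones; tokenization details, Wikipedia processing, and the dependency-based variant are omitted, and the [X]/[Y] placeholders are an assumption of this sketch.

```python
# Sketch of middle-word prompt mining under distant supervision.
from collections import Counter

def middle_word_prompt(sentence: str, subj: str, obj: str):
    """Use the words between subject and object as a candidate prompt."""
    s, o = sentence.find(subj), sentence.find(obj)
    if s == -1 or o == -1 or s == o:
        return None
    if s < o:
        middle = sentence[s + len(subj):o].strip()
        return f"[X] {middle} [Y]" if middle else None
    middle = sentence[o + len(obj):s].strip()
    return f"[Y] {middle} [X]" if middle else None

def mine_prompts(triples, top_t=40):
    """triples: iterable of (sentence, subject, object); keep the top-T prompts."""
    counts = Counter()
    for sent, subj, obj in triples:
        prompt = middle_word_prompt(sent, subj, obj)
        if prompt is not None:
            counts[prompt] += 1
    return [p for p, _ in counts.most_common(top_t)]

print(middle_word_prompt("Barack Obama was born in Hawaii .", "Barack Obama", "Hawaii"))
# -> "[X] was born in [Y]"
```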

3.2 Paraphrasing-based Generation

Our second method for generating prompts is more targeted: it aims to improve lexical diversity while remaining relatively faithful to the original prompt. Specifically, we do so by paraphrasing the original prompt into other semantically similar or identical expressions. For example, if our original prompt is ‘‘x shares a border with y’’, it may be paraphrased into ‘‘x has a common border with y’’ and ‘‘x adjoins y’’. This is conceptually similar to query expansion techniques used in information retrieval that reformulate a given query to improve retrieval performance (Carpineto and Romano, 2012).

Although many methods could be used for
paraphrasing (Romano et al., 2006; Bhagat and
Ravichandran, 2008), we follow the simple


method of using back-translation (Sennrich et al., 2016; Mallinson et al., 2017) to first translate the initial prompt into B candidates in another language, each of which is then back-translated into B candidates in the original language. We then rank the B^2 candidates based on their round-trip probability (i.e., P_forward(t̄ | t̂) · P_backward(t | t̄), where t̂ is the initial prompt, t̄ is the translated prompt in the other language, and t is the final prompt), and keep the top T prompts.
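The sketch below shows one way the round-trip ranking could be implemented with beam search, using MarianMT English-German models from HuggingFace transformers as a stand-in for the WMT'19 fairseq models used in the paper; the model names, the use of length-normalized beam scores as the forward/backward probabilities, and the lack of deduplication are all simplifying assumptions.

```python
# Sketch of paraphrasing via round-trip (back-)translation, ranking the B^2
# candidates by the sum of forward and backward beam-search scores.
from transformers import MarianMTModel, MarianTokenizer

EN_DE = "Helsinki-NLP/opus-mt-en-de"   # assumed stand-in translation models
DE_EN = "Helsinki-NLP/opus-mt-de-en"

def translate(text, model_name, beams):
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    enc = tok([text], return_tensors="pt", padding=True)
    out = model.generate(**enc, num_beams=beams, num_return_sequences=beams,
                         output_scores=True, return_dict_in_generate=True)
    hyps = tok.batch_decode(out.sequences, skip_special_tokens=True)
    return list(zip(hyps, out.sequences_scores.tolist()))   # (candidate, log-score)

def paraphrase(prompt, beams=7, top_t=5):
    candidates = []
    for de, fwd in translate(prompt, EN_DE, beams):          # B pivot translations
        for en, bwd in translate(de, DE_EN, beams):          # B^2 round trips
            candidates.append((en, fwd + bwd))               # round-trip score
    candidates.sort(key=lambda c: c[1], reverse=True)
    return [c for c, _ in candidates[:top_t]]

print(paraphrase("x shares a border with y"))
```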

4 Prompt Selection and Ensembling

In the previous section, we described methods to generate a set of candidate prompts {t_{r,i}}_{i=1}^{T} for a particular relation r. Each of these prompts may be more or less effective at eliciting knowledge from the LM, and thus it is necessary to decide how to use these generated prompts at test time. In this section, we describe three methods to do so.

4.1 Top-1 Prompt Selection

For each prompt, we can measure its accuracy of predicting the ground-truth objects (on a training dataset) using:

A(t_{r,i}) = \frac{\sum_{\langle x, y \rangle \in R} \delta\big(y = \arg\max_{y'} P_{LM}(y' \mid x, t_{r,i})\big)}{|R|},

where R is a set of subject-object pairs with relation r, and δ(·) is Kronecker's delta function, returning 1 if the internal condition is true and 0 otherwise. In the simplest method for querying the LM, we choose the prompt with the highest accuracy and query using only this prompt.

4.2 Rank-based Ensemble

Next we examine methods that use not only the top-1 prompt, but combine together multiple prompts. The advantage of this is that the LM may have observed different entity pairs in different contexts within its training data, and having a variety of prompts may allow for elicitation of knowledge that appeared in these different contexts.

Our first method for ensembling is a parameter-free method that averages the predictions of the top-ranked prompts. We rank all the prompts based on their accuracy of predicting the objects on the training set, and use the average log probabilities4 from the top K prompts to calculate the probability of the object:

s(y \mid x, r) = \frac{1}{K} \sum_{i=1}^{K} \log P_{LM}(y \mid x, t_{r,i}),   (1)

P(y \mid x, r) = \mathrm{softmax}(s(\cdot \mid x, r))_y,   (2)

where t_{r,i} is the prompt ranked at the i-th position. Here, K is a hyper-parameter, where a small K focuses on the few most accurate prompts, and a large K increases the diversity of the prompts.

4.3 Optimized Ensemble

The above method treats the top K prompts equally, which is sub-optimal given that some prompts are more reliable than others. Thus, we also propose a method that directly optimizes prompt weights. Formally, we re-define the score in Equation 1 as:

s(y \mid x, r) = \sum_{i=1}^{T} P_{\theta_r}(t_{r,i} \mid r) \log P_{LM}(y \mid x, t_{r,i}),   (3)

where P_{\theta_r}(t_{r,i} \mid r) = \mathrm{softmax}(\theta_r)_i is a distribution over prompts parameterized by θ_r, a T-sized real-valued vector. For every relation, we learn to score a different set of T candidate prompts, so the total number of parameters is T times the number of relations. The parameter θ_r is optimized to maximize the probability of the gold-standard objects P(y | x, r) over the training data.

4 Intuitively, because we are combining together scores in the log space, this has the effect of penalizing objects that are very unlikely given any certain prompt in the collection. We also compare with linear combination in ablations in § 5.3.

5 Main Experiments

5.1 Experimental Settings

In this section, we assess the extent to which our prompts can improve fact prediction performance, raising the lower bound on the knowledge we discern is contained in LMs.

Dataset As data, we use the T-REx subset (ElSahar et al., 2018) of the LAMA benchmark (Petroni et al., 2019), which has a broader set of 41 relations (compared with the Google-RE subset, which only covers 3). Each relation is associated with at most 1000 subject-object pairs from Wikidata, and a single manually designed prompt.



To learn to mine prompts (§ 3.1), rank prompts (§ 4.2), or learn ensemble weights (§ 4.3), we create a separate training set of subject-object pairs, also from Wikidata, for each relation, with no overlap with the T-REx dataset. We denote the training set as T-REx-train. For consistency with the T-REx dataset in LAMA, T-REx-train is also chosen to contain only single-token objects. To investigate the generality of our method, we also report the performance of our methods on the Google-RE subset,5 which takes a similar form to T-REx but is relatively small and only covers three relations.

Pörner et al. (2019) note that some facts in LAMA can be recalled solely based on surface forms of entities, without memorizing facts. They filter out those easy-to-guess facts and create a more difficult benchmark, denoted as LAMA-UHN. We also conduct experiments on the T-REx subset of LAMA-UHN (i.e., T-REx-UHN) to investigate whether our methods can still obtain improvements on this harder benchmark. Dataset statistics are summarized in Table 1.

Models As for the models to probe, in our main experiments we use the standard BERT-base and BERT-large models (Devlin et al., 2019). We also perform some experiments with other pre-trained models enhanced with external entity representations, namely, ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), which we believe may do better on recall of entities.

Evaluation Metrics We use two metrics to evaluate the success of prompts in probing LMs. The first evaluation metric, micro-averaged accuracy, follows the LAMA benchmark6 in calculating the accuracy over all subject-object pairs for relation r:

\frac{1}{|R|} \sum_{\langle x, y \rangle \in R} \delta(\hat{y} = y),

where ŷ is the prediction and y is the ground truth. Then we average across all relations. However, we found that the object distributions of some relations are extremely skewed (e.g., more than half of the objects in relation native language are French). This can lead to deceptively high scores, even for a majority-class baseline that picks the most common object for each relation, which achieves a score of 22.0%. To mitigate this problem, we also report macro-averaged accuracy, which computes accuracy for each unique object separately, then averages them together to get the relation-level accuracy:

\frac{1}{|\mathrm{uni\_obj}(R)|} \sum_{y' \in \mathrm{uni\_obj}(R)} \frac{\sum_{\langle x, y \rangle \in R, y = y'} \delta(\hat{y} = y)}{|\{y \mid \langle x, y \rangle \in R, y = y'\}|},

where uni_obj(R) returns the set of unique objects from relation r. This is a much stricter metric, with the majority-class baseline only achieving a score of 2.2%.

Properties         T-REx   T-REx-UHN   T-REx-train
#sub-obj pairs     830.2   661.1       948.7
#unique subjects   767.8   600.8       880.1
#unique objects    150.9   120.5       354.6
object entropy     3.6     3.4         4.4

Table 1: Dataset statistics. All the values are averaged across 41 relations.

5 https://code.google.com/archive/p/relation-extraction-corpus/.
6 In LAMA, it is called ‘‘P@1.’’ There might be multiple correct answers for some cases, e.g., a person speaking multiple languages, but we only use one ground truth. We will leave exploring more advanced evaluation methods to future work.
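As a sketch of how the two metrics differ for one relation, the snippet below computes both from parallel lists of predicted and gold object strings (illustrative variable names, not the benchmark's evaluation code).

```python
# Micro-averaged vs. macro-averaged accuracy for a single relation.
from collections import defaultdict

def micro_accuracy(predictions, golds):
    return sum(p == g for p, g in zip(predictions, golds)) / len(golds)

def macro_accuracy(predictions, golds):
    per_object = defaultdict(lambda: [0, 0])          # gold object -> [correct, total]
    for p, g in zip(predictions, golds):
        per_object[g][0] += int(p == g)
        per_object[g][1] += 1
    return sum(c / t for c, t in per_object.values()) / len(per_object)

preds = ["French", "French", "German", "French"]
golds = ["French", "French", "French", "German"]
print(micro_accuracy(preds, golds))   # 0.5
print(macro_accuracy(preds, golds))   # (2/3 + 0/1) / 2 = 0.33...
```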

Methods We attempted different methods for prompt generation and selection/ensembling, and compare them with the manually designed prompts used in Petroni et al. (2019). Majority refers to predicting the majority object for each relation, as mentioned above. Man is the baseline from Petroni et al. (2019) that only uses the manually designed prompts for retrieval. Mine (§ 3.1) uses the prompts mined from Wikipedia through both middle words and dependency paths, and Mine+Man combines them with the manual prompts. Mine+Para (§ 3.2) paraphrases the highest-ranked mined prompt for each relation, while Man+Para uses the manual one instead.

The prompts are combined either by averaging the log probabilities from the TopK highest-ranked prompts (§ 4.2) or using the weights after optimization (§ 4.3; Opti.). Oracle represents the upper bound of the performance of the generated prompts, where a fact is judged as correct if any one of the prompts allows the LM to successfully predict the object.
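For clarity, the Oracle score described above can be sketched as follows, where predict is an assumed callable mapping a (prompt, subject) pair to the LM's top prediction.

```python
# Sketch of the Oracle upper bound: a fact counts as correct if any prompt works.
def oracle_accuracy(prompts, pairs, predict):
    correct = sum(
        any(predict(prompt, subject) == gold for prompt in prompts)
        for subject, gold in pairs
    )
    return correct / len(pairs)
```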

Implementation Details We use T = 40 most
frequent prompts either generated through mining


or paraphrasing in all experiments, and the number of candidates in back-translation is set to B = 7. We remove prompts that only contain stopwords/punctuation or are longer than 10 words to reduce noise. We use the round-trip English-German neural machine translation models pre-trained on WMT'19 (Ng et al., 2019) for back-translation, as English-German is one of the most highly resourced language pairs.7 When optimizing ensemble parameters, we use Adam (Kingma and Ba, 2015) with default parameters and a batch size of 32.

7 https://github.com/pytorch/fairseq/tree/master/examples/wmt19.

Prompts      Top1   Top3   Top5   Opti.   Oracle
BERT-base (Man=31.1)
Mine         31.4   34.2   34.7   38.9    50.7
Mine+Man     31.6   35.9   35.1   39.6    52.6
Mine+Para    32.7   34.0   34.5   36.2    48.1
Man+Para     34.1   35.8   36.6   37.3    47.9
BERT-large (Man=32.3)
Mine         37.0   37.0   36.4   43.7    54.4
Mine+Man     39.4   40.6   38.4   43.9    56.1
Mine+Para    37.8   38.6   38.6   40.1    51.8
Man+Para     35.9   37.3   38.0   38.8    50.0

Table 2: Micro-averaged accuracy of different methods (%). Majority gives us 22.0%. Italic indicates the best single-prompt accuracy, and bold indicates the best non-oracle accuracy overall.

5.2 Evaluation Results

Micro- and macro-averaged accuracy of the different methods are reported in Tables 2 and 3, respectively.

Single Prompt Experiments When only one prompt is used (in the first Top1 column in both tables), the best of the proposed prompt generation methods increases micro-averaged accuracy from 31.1% to 34.1% on BERT-base, and from 32.3% to 39.4% on BERT-large. This demonstrates that the manually created prompts are a somewhat weak lower bound; there are other prompts that further improve the ability to query knowledge from LMs. Table 4 shows some of the mined prompts that resulted in a large performance gain compared with the manual ones. For the relation religion, ‘‘x who converted to y’’ improved 60.0% over the manually defined prompt of ‘‘x is affiliated with the y religion’’, and for the relation subclass of, ‘‘x is a type of y’’ raised the accuracy by 22.7% over ‘‘x is a subclass of y’’. It can be seen that the largest gains from using mined prompts seem to occur in cases where the manually defined prompt is more complicated syntactically (e.g., the former), or when it uses less common wording (e.g., the latter) than the mined prompt.

Prompt Ensembling Next we turn to experiments that use multiple prompts to query the LM. Comparing the single-prompt results in column 1 to the ensembled results in the following three columns, we can see that ensembling multiple prompts almost always leads to better performance. The simple average used in Top3 and Top5 outperforms Top1 across different prompt generation methods.


Prompts      Top1   Top3   Top5   Opti.   Oracle
BERT-base (Man=22.8)
Mine         20.7   23.9   22.7   25.7    36.2
Mine+Man     21.3   24.8   23.8   26.6    38.0
Mine+Para    21.2   23.0   22.4   23.6    34.1
Man+Para     22.8   24.6   23.8   25.0    34.9
BERT-large (Man=25.7)
Mine         26.4   26.3   25.9   30.1    40.7
Mine+Man     28.1   28.3   27.3   30.7    42.2
Mine+Para    26.2   27.1   27.0   27.1    38.3
Man+Para     25.9   27.8   28.3   28.0    39.3

Table 3: Macro-averaged accuracy of different methods (%). Majority gives us 2.2%. Italic indicates the best single-prompt accuracy, and bold indicates the best non-oracle accuracy overall.

The optimized ensemble further raises micro-averaged accuracy to 38.9% and 43.7% on BERT-base and BERT-large respectively, outperforming the rank-based ensemble by a large margin. These two sets of results demonstrate that diverse prompts can indeed query the LM in different ways, and that the optimization-based method is able to find weights that effectively combine different prompts together.

We list the learned weights of the top-3 mined prompts and the accuracy gain over only using the top-1 prompt in Table 5. Weights tend to concentrate on one particular prompt, and the other prompts serve as complements. We also depict the performance of the rank-based ensemble method

ID     Relations               Manual Prompts                        Mined Prompts             Acc. Gain
P140   religion                x is affiliated with the y religion   x who converted to y      +60.0
P159   headquarters location   The headquarter of x is in y          x is based in y           +4.9
P20    place of death          x died in y                           x died at his home in y   +4.6
P264   record label            x is represented by music label y     x recorded for y          +17.2
P279   subclass of             x is a subclass of y                  x is a type of y          +22.7
P39    position held           x has the position of y               x is elected y            +7.9

Table 4: Micro-averaged accuracy gain (%) of the mined prompts over the manual prompts.

ID     Relations      Prompts and Weights                                                                  Acc. Gain
P127   owned by       x is owned by y (.485); x was acquired by y (.151); x division of y (.151)           +7.0
P140   religion       x who converted to y (.615); y tirthankara x (.190); y dedicated to x (.110)         +12.2
P176   manufacturer   y introduced the x (.594); y announced the x (.286); x attributed to the y (.111)    +7.0

Table 5: Weights of the top-3 mined prompts, and the micro-averaged accuracy gain (%) over using the top-1 prompt.

with respect to the number of prompts in Figure 2.
For mined prompts, top-2 or top-3 usually gives
us the best results, while for paraphrased prompts,
top-5 is the best. Incorporating more prompts does
not always improve accuracy, a finding consistent
with the rapidly decreasing weights learned by
the optimization-based method. The gap between
Oracle and Opti. indicates that there is still space
for improvement using better ensemble methods.

Mining vs. Paraphrasing For the rank-based ensembles (Top1, 3, 5), prompts generated by paraphrasing usually perform better than mined prompts, while for the optimization-based ensemble (Opti.), mined prompts perform better. We conjecture this is because mined prompts exhibit more variation compared to paraphrases, and proper weighting is of central importance. This difference in variation can be observed in the average edit distance between the prompts of each class, which is 3.27 and 2.73 for mined and paraphrased prompts respectively. However, the improvement led by ensembling paraphrases is still significant over just using one prompt (Top1 vs. Opti.), raising micro-averaged accuracy from 32.7% to 36.2% on BERT-base, and from 37.8% to 40.1% on BERT-large. This indicates that even small modifications to prompts can result in relatively large changes in predictions. Table 6 demonstrates cases where modification of one word (either a function or a content word) leads to significant accuracy


Chiffre 2: Performance for different top-K ensembles.

ID     Modifications              Acc. Gain
P413   x plays in→at y position   +23.2
P495   x was created→made in y    +10.8
P495   x was→is created in y      +10.0
P361   x is a part of y           +2.7
P413   x plays in y position      +2.2

Table 6: Small modifications (update, insert, and delete) in paraphrases lead to large accuracy gains (%).

improvements, indicating that large-scale LMs are still brittle to small changes in the ways they are queried.

Middle-word vs. Dependency-based We compare the performance of only using middle-word prompts with that of concatenating them with dependency-based prompts in Table 7.


Prompts   Top1   Top3   Top5   Opti.   Oracle
Mid       30.7   32.7   31.2   36.9    45.1
Mid+Dep   31.4   34.2   34.7   38.9    50.7

Table 7: Ablation study of middle-word and dependency-based prompts on BERT-base.

Model      Man    Mine   Mine+Man   Mine+Para   Man+Para
BERT       31.1   38.9   39.6       36.2        37.3
ERNIE      32.1   42.3   43.8       40.1        41.1
KnowBert   26.2   34.1   34.6       31.9        32.1

Table 8: Micro-averaged accuracy (%) of various LMs.

The improvements confirm our intuition that words belonging to the dependency path but not in the middle of the subject and object are also indicative of the relation.

Micro vs. Macro Comparing Tables 2 and 3, we can see that macro-averaged accuracy is much lower than micro-averaged accuracy, indicating that macro-averaged accuracy is a more challenging metric that evaluates how many unique objects LMs know. Our optimization-based method improves macro-averaged accuracy from 22.8% to 25.7% on BERT-base, and from 25.7% to 30.1% on BERT-large. This again confirms the effectiveness of ensembling multiple prompts, but the gains are somewhat smaller. Notably, in our optimization-based methods, the ensemble weights are optimized on each example in the training set, which is more conducive to optimizing micro-averaged accuracy. Optimization to improve macro-averaged accuracy is potentially an interesting direction for future work that may result in prompts more generally applicable to different types of objects.

Performance of Different LMs In Table 8, we compare BERT with ERNIE and KnowBert, which are enhanced with external knowledge by explicitly incorporating entity embeddings. ERNIE outperforms BERT by 1 point even with the manually defined prompts, but our prompt generation methods further emphasize the difference between the two models, with the highest accuracy numbers differing by 4.2 points using the Mine+Man method.

Model        Man    Mine   Mine+Man   Mine+Para   Man+Para
BERT-base    21.3   28.7   29.4       26.8        27.0
BERT-large   24.2   34.5   34.5       31.6        29.8

Table 9: Micro-averaged accuracy (%) on LAMA-UHN.

Model        Man    Mine   Mine+Man   Mine+Para   Man+Para
BERT-base    9.8    10.0   10.4       9.6         10.0
BERT-large   10.5   10.6   11.3       10.4        10.7

Table 10: Micro-averaged accuracy (%) on Google-RE.

This indicates that if LMs are queried effectively, the differences between highly performant models may become more clear. KnowBert underperforms BERT on LAMA, which is opposite to the observation made in Peters et al. (2019). This is probably because multi-token subjects/objects are used to evaluate KnowBert in Peters et al. (2019), while LAMA contains only single-token objects.

LAMA-UHN Evaluation The performance on the LAMA-UHN benchmark is reported in Table 9. Although the overall performance drops dramatically compared to the performance on the original LAMA benchmark (Table 2), optimized ensembles can still outperform manual prompts by a large margin, indicating that our methods are effective in retrieving knowledge that cannot be inferred based on surface forms.

5.3 Analysis

Next, we perform further analysis to better understand what type of prompts proved most suitable for facilitating retrieval of knowledge from LMs.

Prediction Consistency by Prompt We first analyze the conditions under which prompts will yield different predictions. We define the divergence between the predictions of two prompts t_{r,i} and t_{r,j} using the following equation:

\mathrm{Div}(t_{r,i}, t_{r,j}) = \frac{\sum_{\langle x, y \rangle \in R} \delta\big(C(x, y, t_{r,i}) \neq C(x, y, t_{r,j})\big)}{|R|},

where C(x, y, t_{r,i}) = 1 if prompt t_{r,i} can successfully predict y and 0 otherwise, and δ(·) is Kronecker's delta.


Chiffre 3: Correlation of edit distance between prompts
and their prediction divergence.

Chiffre 4: Ranking position distribution of prompts with
different patterns. Lower is better.

x/y V y/x  |  x/y V P y/x  |  x/y V W* P y/x
V = verb particle? adv?
W = (noun | adj | adv | pron | det)
P = (prep | particle | inf. marker)

Table 11: Three part-of-speech-based regular expressions used in ReVerb to identify relational phrases.

For each relation, we normalize the edit distance of two prompts into [0, 1] and bucket the normalized distance into five bins with intervals of 0.2. We plot a box chart for each bin to visualize the distribution of prediction divergence in Figure 3, with the green triangles representing mean values and the green bars in the box representing median values. As the edit distance becomes larger, the divergence increases, which confirms our intuition that very different prompts tend to cause different prediction results. The Pearson correlation coefficient is 0.25, which shows that there is a weak correlation between these two quantities.
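A sketch of the divergence measure is given below; predict is again an assumed callable returning the LM's top prediction for a (prompt, subject) pair.

```python
# Fraction of subject-object pairs on which two prompts disagree about
# whether the gold object is retrieved (the Div measure above).
def divergence(prompt_i, prompt_j, pairs, predict):
    disagree = 0
    for subject, gold in pairs:
        correct_i = predict(prompt_i, subject) == gold
        correct_j = predict(prompt_j, subject) == gold
        disagree += int(correct_i != correct_j)
    return disagree / len(pairs)
```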

Performance on Google-RE We also report the performance of the optimized ensemble on the Google-RE subset in Table 10. Again, ensembling diverse prompts improves accuracies for both the BERT-base and BERT-large models. The gains are somewhat smaller than those on the T-REx subset, which might be caused by the fact that there are only three relations and one of them (predicting the birth date of a person) is particularly hard, to the extent that only one prompt yields non-zero accuracy.


POS-based Analysis Next, we try to examine which types of prompts tend to be effective in the abstract by examining the part-of-speech (POS) patterns of prompts that successfully extract knowledge from LMs. In open information extraction systems (Banko et al., 2007), manually defined patterns are often leveraged to filter out noisy relational phrases. For example, ReVerb (Fader et al., 2011) incorporates the three syntactic constraints listed in Table 11 to improve the coherence and informativeness of the mined relational phrases. To test whether these patterns are also indicative of the ability of a prompt to retrieve knowledge from LMs, we use these three patterns to group prompts generated by our methods into four clusters, where the ‘‘other’’ cluster contains prompts that do not match any pattern. We then calculate the rank of each prompt within the extracted prompts, and plot the distribution of rank using box plots in Figure 4.8 We can see that the average rank of prompts matching these patterns is better than those in the ‘‘other’’ group, confirming our intuition that good prompts should conform with those patterns. Some of the best performing prompts' POS signatures are ‘‘x VBD VBN IN y’’ (e.g., ‘‘x was born in y’’) and ‘‘x VBZ DT NN IN y’’ (e.g., ‘‘x is the capital of y’’).

Cross-model Consistency Finally, it is of interest to know whether the prompts we are extracting are highly tailored to a specific model, or whether they can generalize across models.

8 We use the ranking position of a prompt to represent its quality instead of its accuracy because accuracy distributions of different relations might span different ranges, making accuracy not directly comparable across relations.

Test        BERT-base       BERT-large
Train       base    large   large    base
Mine        38.9    38.7    43.7     42.2
Mine+Man    39.6    40.1    43.9     42.2
Mine+Para   36.2    35.6    40.1     39.0
Man+Para    37.3    35.6    38.8     37.5

Table 12: Cross-model micro-averaged accuracy (%). The first row is the model to test, and the second row is the model on which prompt weights are learned.

Test        BERT            ERNIE
Train       BERT    ERNIE   ERNIE    BERT
Mine        38.9    38.0    42.3     38.7
Mine+Man    39.6    39.5    43.8     40.5
Mine+Para   36.2    34.2    40.1     39.0
Man+Para    37.3    35.2    41.1     40.3

Table 13: Cross-model micro-averaged accuracy (%). The first row is the model to test, and the second row is the model on which prompt weights are learned.

To do so, we use two settings: one compares BERT-base and BERT-large, the same model architecture with different sizes; the other compares BERT-base and ERNIE, different model architectures with a comparable size. In each setting, we compare when the optimization-based ensembles are trained on the same model, or when they are trained on one model and tested on the other. As shown in Tables 12 and 13, we found that in general there is usually some drop in performance in the cross-model scenario (third and fifth columns), but the losses tend to be small, and the highest performance when querying BERT-base is actually achieved by the weights optimized on BERT-large. Notably, the best accuracies of 40.1% and 42.2% (Table 12) and 39.5% and 40.5% (Table 13) with the weights optimized on the other model are still much higher than those obtained by the manual prompts, indicating that optimized prompts still afford large gains across models. Another interesting observation is that the drop in performance on ERNIE (last two columns in Table 13) is larger than that on BERT-large (last two columns in Table 12) using weights optimized on BERT-base, indicating that models sharing the same architecture benefit more from the same prompts.

Linear vs. Log-linear Combination As mentioned in § 4.2, we use a log-linear combination of probabilities in our main experiments. However, it is also possible to calculate probabilities through regular linear interpolation:

P(y \mid x, r) = \frac{1}{K} \sum_{i=1}^{K} P_{LM}(y \mid x, t_{r,i}).   (4)


Chiffre 5: Performance of two interpolation methods.

We compare these two ways to combine pre-
dictions from multiple mined prompts in Figure 5
(§ 4.2). We assume that log-linear combination
outperforms linear combination because log prob-
abilities make it possible to penalize objects that
are very unlikely given any certain prompt.
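The contrast between the two interpolation rules can be sketched as follows, where probs is a (K, vocab_size) array of per-prompt distributions P_LM(y | x, t_{r,i}) (an illustrative layout).

```python
# Log-linear (Eqs. 1-2) vs. linear (Eq. 4) combination of K prompt distributions.
import numpy as np

def log_linear_combination(probs: np.ndarray) -> np.ndarray:
    s = np.log(probs + 1e-12).mean(axis=0)
    e = np.exp(s - s.max())
    return e / e.sum()              # softmax over averaged log probabilities

def linear_combination(probs: np.ndarray) -> np.ndarray:
    return probs.mean(axis=0)       # simple average of probabilities

# An object that any single prompt considers very unlikely is penalized much
# more heavily by the log-linear rule than by the linear one.
```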

6 Omitted Design Elements

Finally, in addition to the elements of our main proposed methodology in § 3 and § 4, we experimented with a few additional methods that did not prove highly effective, and thus were omitted from our final design. We briefly describe these below, along with cursory experimental results.

6.1 LM-aware Prompt Generation

We examined methods to generate prompts by solving an optimization problem that maximizes the probability of producing the ground-truth objects with respect to the prompts:

t_r^* = \arg\max_{t_r} P_{LM}(y \mid x, t_r),

where P_LM(y | x, t_r) is parameterized with a pre-trained LM. In other words, this method directly searches for a prompt that causes the LM to assign ground-truth objects the highest probability.


Prompts   Top1   Top3   Top5   Opti.   Oracle
before    31.9   34.5   33.8   38.1    47.9
after     30.2   32.5   34.7   37.5    50.8

Table 14: Micro-averaged accuracy (%) before and after LM-aware prompt fine-tuning.

Features     Mine            Paraphrase
             macro   micro   macro   micro
forward      25.2    38.1    25.0    37.3
+backward    25.5    38.2    25.2    37.4

Table 15: Performance (%) of using forward and backward features with BERT-base.

Solving this problem of finding text sequences that optimize some continuous objective has been studied both in the context of end-to-end sequence generation (Hoang et al., 2017), and in the context of making small changes to an existing input for adversarial attacks (Ebrahimi et al., 2018; Wallace et al., 2019). However, we found that directly optimizing prompts guided by gradients was unstable and often yielded prompts in unnatural English in our preliminary experiments. Thus, we instead resorted to a more straightforward hill-climbing method that starts with an initial prompt, then masks out one token at a time and replaces it with the most probable token conditioned on the other tokens, inspired by the mask-predict decoding algorithm used in non-autoregressive machine translation (Ghazvininejad et al., 2019):9

P_{LM}(w_i \mid t_r \setminus i) = \frac{\sum_{\langle x, y \rangle \in R} P_{LM}(w_i \mid x, t_r \setminus i, y)}{|R|},

where w_i is the i-th token in the prompt and t_r \ i is the prompt with the i-th token masked out. We followed a simple rule that modifies a prompt from left to right, and this is repeated until convergence.
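The hill-climbing refinement can be sketched as below; masked_token_probs is an assumed helper that, given a filled-in sentence and a position, returns a vocabulary-sized array of probabilities from the masked LM, and the [X]/[Y] placeholders are again an assumption of this sketch.

```python
# Sketch of left-to-right, one-token-at-a-time prompt refinement (mask-predict style).
import numpy as np

def refine_prompt(prompt_tokens, train_pairs, masked_token_probs, vocab, max_rounds=5):
    tokens = list(prompt_tokens)
    for _ in range(max_rounds):                  # repeated until convergence
        changed = False
        for i, tok in enumerate(tokens):
            if tok in ("[X]", "[Y]"):            # keep subject/object slots fixed
                continue
            scores = np.zeros(len(vocab))
            for subj, obj in train_pairs:        # average over <x, y> pairs in R
                filled = [subj if t == "[X]" else obj if t == "[Y]" else t
                          for t in tokens]
                scores += masked_token_probs(" ".join(filled), i)
            best = vocab[int(scores.argmax())]
            if best != tokens[i]:
                tokens[i], changed = best, True
        if not changed:
            break
    return tokens
```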

We used this method to refine all the mined and manual prompts on the T-REx-train dataset, and display their performance on the T-REx dataset in Table 14. After fine-tuning, the oracle performance increased significantly, while the ensemble performance (both rank-based and optimization-based) dropped slightly. This indicates that LM-aware fine-tuning has the potential to discover better prompts, but some portion of the refined prompts may have over-fit to the training set upon which they were optimized.

9 In theory, this algorithm can be applied to both masked LMs like BERT and traditional left-to-right LMs, since the masked probability can be computed using Bayes' theorem for traditional LMs. However, in practice, due to the large size of the vocabulary, it can only be approximated with beam search, or computed with more complicated continuous optimization algorithms (Hoang et al., 2017).

6.2 Forward and Backward Probabilities

Finally, given class imbalance and the propensity of the model to over-predict the majority object, we examine a method to encourage the model to predict subject-object pairs that are more strongly aligned. Inspired by the maximum mutual information objective used in Li et al. (2016a), we add the backward log probability log P_LM(x | y, t_{r,i}) of each prompt to our optimization-based scoring function in Equation 3. Due to the large search space for objects, we turn to an approximation approach that only computes the backward probability for the most probable B objects given by the forward probability at both training and test time. As shown in Table 15, the improvement resulting from the backward probability is small, indicating that a diversity-promoting scoring function might not be necessary for knowledge retrieval from LMs.
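A sketch of the approximation is shown below: the B most probable objects under the forward score are re-scored by adding a backward log probability, with backward_logprob an assumed callable around the masked LM.

```python
# Re-rank the top-B forward candidates with the backward probability log P(x | y, t).
def rescore_with_backward(candidates, subject, prompt, backward_logprob, top_b=10):
    """candidates: list of (object, forward_log_score) pairs for one query."""
    top = sorted(candidates, key=lambda c: c[1], reverse=True)[:top_b]
    rescored = [(obj, fwd + backward_logprob(subject, obj, prompt)) for obj, fwd in top]
    return sorted(rescored, key=lambda c: c[1], reverse=True)
```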

7 Related Work

Much work has focused on understanding the internal representations in neural NLP models (Belinkov and Glass, 2019), either by using extrinsic probing tasks to examine whether certain linguistic properties can be predicted from those representations (Shi et al., 2016; Linzen et al., 2016; Belinkov et al., 2017), or by ablations to the models to investigate how behavior varies (Li et al., 2016b; Smith et al., 2017). For contextualized representations in particular, a broad suite of NLP tasks are used to analyze both syntactic and semantic properties, providing evidence that contextualized representations encode linguistic knowledge in different layers (Hewitt and Manning, 2019; Tenney et al., 2019a; Tenney et al., 2019b; Jawahar et al., 2019; Goldberg, 2019).

Different from analyses probing the representations themselves, our work follows Petroni et al. (2019) and Pörner et al. (2019) in probing for factual


knowledge. They use manually defined prompts, which may be under-estimating the true performance obtainable by LMs. Concurrently to this work, Bouraoui et al. (2020) made a similar observation that using different prompts can help better extract relational knowledge from LMs, but they use models explicitly trained for relation extraction, whereas our methods examine the knowledge included in LMs without any additional training. Orthogonally, some previous works integrate external knowledge bases so that the language generation process is explicitly conditioned on symbolic knowledge (Ahn et al., 2016; Yang et al., 2017; Logan et al., 2019; Hayashi et al., 2020). Similar extensions have been applied to pre-trained LMs like BERT, where contextualized representations are enhanced with entity embeddings (Zhang et al., 2019; Peters et al., 2019; Pörner et al., 2019). In contrast, we focus on better knowledge retrieval through prompts from LMs as-is, without modifying them.

8 Conclusion

In this paper, we examined the importance of the prompts used in retrieving factual knowledge from language models. We propose mining-based and paraphrasing-based methods to systematically generate diverse prompts to query specific pieces of relational knowledge. Those prompts, when combined together, improve factual knowledge retrieval accuracy by 8%, outperforming manually designed prompts by a large margin. Our analysis indicates that LMs are indeed more knowledgeable than initially indicated by previous results, but they are also quite sensitive to how we query them. This indicates potential future directions such as (1) more robust LMs that can be queried in different ways but still return similar results, (2) methods to incorporate factual knowledge in LMs, and (3) further improvements in optimizing methods to query LMs for knowledge. Finally, we have released all our learned prompts to the community as the LM Prompt and Query Archive (LPAQA), available at: https://github.com/jzbjyb/LPAQA.

Acknowledgments

This work was supported by a gift from Bosch Research and NSF award no. 1815287. We would like to thank Paul Michel, Hiroaki Hayashi, Pengcheng Yin, and Shuyan Zhou for their insightful comments and suggestions.

References

Eugene Agichtein and Luis Gravano. 2000.
Snowball: Extracting relations from large plain-
text collections. In Proceedings of the Fifth
ACM Conference on Digital Libraries, June
2-7, 2000, San Antonio, TX, Etats-Unis, pages 85–94.
ACM.

Sungjin Ahn, Heeyoul Choi, Tanel P¨arnamaa,
and Yoshua Bengio. 2016. A neural knowledge
language model. CoRR, abs/1608.00318v2.

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey
Ling, and Tom Kwiatkowski. 2019. Matching
the blanks: Distributional similarity for relation
learning. In Proceedings of the 57th Annual
Meeting of the Association for Computational
Linguistics, pages 2895–2905, Florence, Italy.
Association for Computational Linguistics.

Michele Banko, Michael J. Cafarella, Stephen
Soderland, Matthew Broadhead, and Oren
Etzioni. 2007. Open information extraction
from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 2670–2676.

Yonatan Belinkov, Nadir Durrani, Fahim Dalvi,
Hassan Sajjad, and James Glass. 2017. What
do neural machine translation models learn
about morphology? In Proceedings of the 55th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long
Papers), pages 861–872, Vancouver, Canada.
Association for Computational Linguistics.

Yonatan Belinkov and James R. Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72.


Rahul Bhagat and Deepak Ravichandran. 2008. Large scale acquisition of paraphrases for learning surface patterns. In Proceedings


of ACL-08: HLT, pages 674–682, Columbus, Ohio. Association for Computational Linguistics.

Zied Bouraoui, Jose Camacho-Collados, et
Steven Schockaert. 2020. Inducing relational
knowledge from BERT. In Thirty-Fourth AAAI
Conference on Artificial Intelligence (AAAI),
New York, Etats-Unis.

Claudio Carpineto and Giovanni Romano. 2012.
A survey of automatic query expansion
in information retrieval. ACM Computing Surveys, 44(1):1:1–1:50.

Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3079–3087.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186.

Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics.

Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Frédérique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.

Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1535–1545.

Michael Gamon, Anthony Aue, and Martine
Smets. 2005. Sentence-level MT evaluation
without reference translations: Beyond lan-
guage modeling. In Proceedings of EAMT,
pages 103–111.

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114–6123, Hong Kong, China. Association for Computational Linguistics.

Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. CoRR, abs/1901.05287v1.

Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, et
Graham Neubig. 2020. Latent relation language
models. In Thirty-Fourth AAAI Conference on
Artificial Intelligence (AAAI), New York, Etats-Unis.

John Hewitt and Christopher D. Manning. 2019.
A structural probe for finding syntax in word
representations. In Proceedings of the 2019
Conference of the North American Chapter of
the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT
2019, Minneapolis, MN, USA, June 2-7,
2019, Volume 1 (Long and Short Papers),
pages 4129–4138.

Cong Duy Vu Hoang, Gholamreza Haffari, and Trevor Cohn. 2017. Towards decoding as continuous optimisation in neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 146–156, Copenhagen, Denmark. Association for Computational Linguistics.

Robert L. Logan IV, Nelson F. Liu, Matthew E.
Peters, Matt Gardner, and Sameer Singh.


2019. Barack's wife Hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5962–5971.

Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 3651–3657.

Diederik P. Kingma and Jimmy Ba. 2015. Adam:
A method for stochastic optimization. In 3rd
International Conference on Learning Repre-
sentations, ICLR 2015, San Diego, Californie, Etats-Unis,
May 7-9, 2015, Conference Track Proceedings.

fonction

objective

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng
Gao, and Bill Dolan. 2016un. A diversity-
promoting
for neural
conversation models. In NAACL HLT 2016,
Le 2016 Conference of the North American
Chapter of the Association for Computational
Linguistics: Human Language Technologies,
San Diego California, Etats-Unis, Juin 12-17, 2016,
pages 110–119.

Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b.
Understanding neural networks through repre-
sentation erasure. CoRR, abs/1612.08220v3.

Tal Linzen, Emmanuel Dupoux, and Yoav
Goldberg. 2016. Assessing the ability of LSTMs
to learn syntax-sensitive dependencies. Trans-
actions of the Association for Computational
Linguistics, 4:521–535.

Jonathan Mallinson, Rico Sennrich, and Mirella
Lapata. 2017. Paraphrasing revisited with
neural machine translation. In Proceedings of
the 15th Conference of the European Chapter of
the Association for Computational Linguistics:
Volume 1, Long Papers, pages 881–893,
Valencia, Spain. Association for Computational
Linguistics.

Bryan McCann, Nitish Shirish Keskar, Caiming
Xiong, and Richard Socher. 2018. The natural
language decathlon: Multitask learning as
question answering. CoRR, abs/1806.08730v1.


Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 51–61.

Gábor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings.

Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings.

Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 234–239. IEEE.

Nathan Ng, Kyra Yee, Alexei Baevski, Myle
Ott, Michael Auli, and Sergey Edunov. 2019.
Facebook FAIR’s WMT19 news translation
task submission. In Proceedings of the Fourth
Conference on Machine Translation, WMT
2019, Florence, Italy, August 1-2, 2019 –
Volume 2: Shared Task Papers, Day 1,
pages 314–319.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237.

Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43–54, Hong Kong, China. Association for Computational Linguistics.

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.

Nina Pörner, Ulli Waltinger, and Hinrich Schütze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. CoRR, abs/1911.03681v1.

Alec Radford, Jeffrey Wu, Rewon Child, David
Luan, Dario Amodei, and Ilya Sutskever. 2019.
Language models are unsupervised multitask
learners. OpenAI Blog, 1(8).

Nazneen Fatema Rajani, Bryan McCann,
Caiming Xiong, and Richard Socher. 2019.
Explain yourself! Leveraging language models
for commonsense reasoning. In Proceedings of
the 57th Annual Meeting of the Association for
Computational Linguistics, pages 4932–4942,
Florence, Italy. Association for Computational
Linguistics.

Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 41–47. Association for Computational Linguistics.

Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigating a generic paraphrase-based approach for relation extraction. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics.

Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027–3035.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.

Xing Shi, Inkit Padhi, and Kevin Knight. 2016.
Does string-based neural MT learn source
syntax? In Proceedings of the 2016 Conference
on Empirical Methods in Natural Language
Processing, pages 1526–1534, Austin, Texas.
Association for Computational Linguistics.

Noah A. Smith, Chris Dyer, Miguel Ballesteros, Graham Neubig, Lingpeng Kong, and Adhiguna Kuncoro. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1249–1258.

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 4593–4601.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.

Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1499–1509.

Trieu H. Trinh and Quoc V. Le. 2018. A simple
method for commonsense reasoning. CoRR,
abs/1806.02847v2.

Eric Wallace, Shi Feng, Nikhil Kandpal, Matt
Gardner, and Sameer Singh. 2019. Universal
adversarial triggers for attacking and analyzing
NLP. In Proceedings of the 2019 Conference
on Empirical Methods in Natural Language
Processing and the 9th International Joint
Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 2153–2162, Hong
Kong, China. Association for Computational
Linguistics.

Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1850–1859.

Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1441–1451.

Geoffrey Zweig and Christopher J. C. Burges.
2011. The Microsoft Research sentence
completion challenge. Microsoft Research,
Redmond, WA, USA, Technical Report MSR-
TR-2011-129.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

e
d
toi

/
t

un
c
je
/

je

un
r
t
je
c
e

p
d

F
/

d
o

je
/

.

1
0
1
1
6
2

/
t

je

un
c
_
un
_
0
0
3
2
4
1
9
2
3
8
6
7

/

/
t

je

un
c
_
un
_
0
0
3
2
4
p
d

.

F

b
oui
g
toi
e
s
t

t

o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3

438
Télécharger le PDF