Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals
Yanai Elazar1,2 Shauli Ravfogel1,2 Alon Jacovi1 Yoav Goldberg1,2
1Computer Science Department, Bar-Ilan University
2Allen Institute for Artificial Intelligence
{yanaiela,shauli.ravfogel,alonjacovi,yoav.goldberg}@gmail.com
Abstract
A growing body of work makes use of probing in order to investigate the workings of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results, and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded. Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention that removes it from the representation. Equipped with this new analysis tool, we can ask questions that were not possible before, for example, is part-of-speech information important for word prediction? We perform a series of analyses on BERT to answer these types of questions. Our findings demonstrate that conventional probing performance is not correlated to task importance, and we call for increased scrutiny of claims that draw behavioral or causal conclusions from probing results.1
1 Introduction
What drives a model to perform a specific prediction? What information is being used for the prediction, and what would have happened if that information went missing? Because neural representations are opaque and hard to interpret, answering these questions is challenging.
The recent advancements in Language Models
(LMs) and their success in transfer learning of
many NLP tasks (e.g., Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019b) spiked interest
1The code is available at: https://github.com/yanaiela/amnesic_probing.
in understanding how these models work and what
is being encoded in them. One prominent meth-
odology that attempts to shed light on those ques-
tions is probing (Conneau et al., 2018) (also known as auxiliary prediction [Adi et al., 2016] and diagnostic classification [Hupkes et al., 2018]).
Under this methodology, one trains a simple model
—a probe—to predict some desired information
from the latent representations of the pre-trained
模型. High prediction performance is interpreted
as evidence for the information being encoded
in the representation. A key drawback of such an
approach is that while it may indicate that the
information can be extracted from the represen-
站, it provides no evidence for or against the
actual use of this information by the model.
Indeed, Hewitt and Liang (2019) have shown that
under certain conditions, above-random probing
accuracy can be achieved even when the infor-
mation that one probes for is linguistically mean-
ingless noise, which is unlikely to have any use
by the actual model. Recently, Ravichander et al. (2020) showed that models encode linguistic properties, even when not required at all for
solving the task, questioning the usefulness and
common interpretation of probing. These results
call for higher scrutiny of causal claims based on
probing results.
In this paper, we propose a counterfactual approach that serves as a step towards causal attribution: Amnesic Probing (see Figure 1 for a
schematic view). We build on the intuition that
if a property Z (e.g., part-of-speech) is being used for a task T (e.g., language modeling), then
the removal of Z should negatively influence the
ability of the model to solve the task. Conversely, when the removal of Z has little or no influence on
the ability to solve T , one can argue that knowing
Z is not a significant contributing factor in the
strategy the model employs in solving T .
As opposed to previous work that focused on
intervention in the input space (Goyal et al., 2019;
Transactions of the Association for Computational Linguistics, vol. 9, pp. 160–175, 2021. https://doi.org/10.1162/tacl_a_00359
Action Editor: Radu Florian. Submission batch: 7/2020; Revision batch: 9/2020; Published 3/2021.
© 2021 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
where one removes some component and mea-
sures the influence of that intervention.
We study several linguistic properties such as part-of-speech (POS) and dependency labels.
Overall, we find that, as opposed to the common belief, high probing performance does not mean that the probed information is used for predicting the main task (§4). This is consistent with the recent findings of Ravichander et al. (2020). Our
analysis also reveals that the properties we exam-
ine are often being used differently in the masked
setting (which is mostly used in LM training) and in the non-masked setting (which is commonly used for probing or fine-tuning) (§5). We then dive deeper into a more fine-grained analysis, and
show that not all of the linguistic property labels
equally influence prediction (§6). Finally, we re-evaluate previous claims about the way that BERT processes the traditional NLP pipeline (Tenney et al., 2019a) with amnesic probing, and provide a novel interpretation of the utility of different layers (§7).
2 Amnesic Probing
2.1 Setup and Formulation
Given a set of labeled data of data points X = x1, . . . , xn,2 and task labels Y = y1, . . . , yn, we analyze a model f that predicts the labels Y from X: ŷi = f(xi). We assume that this model
is composed of two parts: an encoder h that
transforms input xi into a representation vector
hxi and a classifier c that is used for predicting ŷi based on hxi: ŷi = c(h(xi)). We refer by model to
the component that follows the encoding function
h and is used for the classification of the task of
interest y. Each data point xi is also associated with a property of interest zi, which represents additional information, which may or may not affect the decision of the classifier c.
In this work, we are interested in the change in prediction of the classifier c on the prediction ŷi which is caused due to the removal of the property Z from the representation h(xi), that is, h(xi)¬Z.
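The decomposition above can be sketched in a few lines. All weights below are illustrative stand-ins (in the paper, h is BERT and the projection comes from INLP; here a single direction is zeroed out by hand just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's components (names are ours, purely illustrative):
# h: encoder mapping an input x_i to a representation vector
# c: classifier head operating on that representation
W_enc = rng.normal(size=(10, 8))   # pretend encoder weights
W_clf = rng.normal(size=(8, 3))    # pretend classifier weights

def h(x):
    return np.tanh(x @ W_enc)      # h(x_i): representation vector

def c(hx):
    return int(np.argmax(hx @ W_clf))   # y_hat_i = c(h(x_i))

# An amnesic intervention replaces h(x) with P @ h(x), where P projects out
# the directions encoding property Z (in the paper, P is learned by INLP).
P = np.eye(8)
P[0, 0] = 0.0                      # here: zero out one "Z direction" by hand

x = rng.normal(size=10)
y_before = c(h(x))                 # prediction on the original representation
y_after = c(P @ h(x))              # prediction on the amnesic h(x_i)_{¬Z}
print(y_before, y_after)
```

Comparing y_before and y_after (over a dataset, not a single point) is exactly the behavioral change the method measures.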
2.2 Amnesic Probing with INLP
Under the counterfactual approach, we aim to
evaluate the behavioral influence of a specific
type of information Z (e.g., POS) on some task
2The data points can be words, documents, images, etc., based on the application.
Figure 1: A schematic description of the proposed amnesic intervention: We transform the contextualized representation of the word ''ran'' so as to remove information (here, POS), resulting in a ''cleaned'' version h¬POS_ran. This representation is fed to the word-prediction layer, and the behavioral influence of POS erasure is measured.
Kaushik et al., 2020; Vig et al., 2020) or in specific neurons (Vig et al., 2020), our intervention is done on the representation layers. This makes it easier than changing the input (which is non-trivial) and more efficient than querying hundreds of neurons (which becomes combinatorial when considering the effect of multiple neurons simultaneously).
We demonstrate that amnesic probing can func-
tion as a debugging and analysis tool for neural
models. Specifically, by using amnesic probing
we show how to deduce whether a property is
used by a given model in prediction.
In order to build the counterfactual represen-
tations, we need a function that operates on a pre-
trained representation and returns a counterfactual
version which no longer encodes the property we
focus on. We use the recently proposed algorithm
for neutralizing linear information: Iterative Null-
space Projection (INLP) (Ravfogel et al., 2020).
This approach allows us to ask the counterfactual
question: ''How will the prediction of a task differ
without access to some property?’’ (Pearl and
Mackenzie, 2018). This approach relies on the as-
sumption that the usefulness of some information
can be measured by neutralizing it from the repre-
sentation, and witnessing the resulting behavioral
改变. It echoes the basic idea of ablation tests
(e.g., language modeling). To do so, we selectively
remove this information from the representation
and observe the change in the behavior of the
model on the main task.
One commonly used method for information
removal relies on adversarial training through the
gradient reversal layer technique (Ganin et al.,
2016). However, this technique requires changing the original encoding by retraining the model,
which is not desired in our case as we wish to
study the original model's behavior. Moreover, Elazar and Goldberg (2018) found that this technique does not completely remove all the information from the learned representation.
Instead, we make use of a recently proposed
algorithm called Iterative Nullspace Projection
(INLP) (Ravfogel et al., 2020). Given a labeled
dataset of representations H, and a property to
remove, Z, INLP neutralizes the ability to linearly
predict Z from H. It does so by training a sequence
of linear classifiers (probes) c1, . . . , ck that predict
Z, interpreting each one as conveying information
on a unique direction in the latent space that
corresponds to Z, and iteratively removing each
of these directions. Concretely, we assume that the ith probe ci is parameterized by a matrix Wi. In the ith iteration, ci is trained to predict Z from H,3 and the data is projected onto its nullspace using a projection matrix PN(Wi). This operation guarantees WiPN(Wi)H = 0, i.e., it neutralizes the features in the latent space which were found by Wi to be indicative of Z. By repeating this process until no classifier achieves above-majority accuracy, INLP removes all such features.4
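The iterative loop can be sketched with plain numpy. This is a simplified sketch: a least-squares linear probe stands in for the linear SVM used in the paper, and a fixed iteration count replaces the accuracy-based stopping criterion described above; the real implementation is in Ravfogel et al. (2020):

```python
import numpy as np

def fit_linear_probe(H, z):
    """Least-squares stand-in for the paper's linear SVM probe; returns a
    weight matrix W of shape (1, d) whose row spans the found Z-direction."""
    targets = np.where(z == 1, 1.0, -1.0)
    w, *_ = np.linalg.lstsq(H, targets, rcond=None)
    return w[None, :]

def nullspace_projection(W):
    """Projection P_N(W) onto the nullspace of W, so that W @ P_N(W) = 0."""
    _, s, Vt = np.linalg.svd(W, full_matrices=False)
    basis = Vt[s > 1e-10]                  # orthonormal basis of W's row space
    return np.eye(W.shape[1]) - basis.T @ basis

def inlp(H, z, n_iters=3):
    """Iteratively neutralize the ability to linearly predict z from rows of H."""
    H_proj = H.copy()
    for _ in range(n_iters):
        W = fit_linear_probe(H_proj, z)    # probe c_i for the property Z
        P_i = nullspace_projection(W)
        H_proj = H_proj @ P_i              # remove the direction found by c_i
    return H_proj
```

After a few iterations, a freshly trained linear probe on the projected representations falls back toward majority accuracy, while the representations themselves change as little as possible.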
Amnesic Probing vs. Probing Note that
amnesic probing extends conventional probing,
as it is only relevant in cases where the property of
interest can be predicted from the representation.
If a probe gets random accuracy, the information
cannot be used by the model to begin with. 作为
这样的, amnesic probing can be seen as a comple-
mentary method, which inspects probe accuracy as
a first step, but then proceeds to derive behavioral
outcomes from the directions associated with the
probe, with respect to a specific task.
2.3 Controls
The usage of INLP in this setup involves some
subtleties we aim to account for: (1) Any modifi-
cation to the representation, regardless of whether
it removes information necessary to the task, 可能
cause a decrease in performance. Can the drop
in performance be attributed solely to the modi-
fication of the representation? (2) The removal
of any property using INLP may also cause re-
moval of correlating properties. Does the re-
moved information only pertain to the property
in question?
Control Over Information In order to control
for the information loss of the representations, we
make use of a baseline that removes the same
number of directions as INLP does, but randomly.
For every INLP iteration, the data matrix's rank decreases by the number of labels of the inspected property. This operation removes information
from the representation which might be used for
prediction. Using this control, Rand, instead of finding the directions using a classifier that learned some task, we generate random vectors from a uniform distribution, which accounts for random directions. Then, we construct the projection matrix as in INLP, by finding the intersection of nullspaces.
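A minimal sketch of this control, assuming only that the sampled random directions are orthonormalized before projecting (the function name and sizes are ours):

```python
import numpy as np

def random_direction_projection(dim, n_directions, seed=0):
    """Rand control: project out n_directions random directions, mirroring
    INLP's rank reduction without targeting any particular property."""
    rng = np.random.default_rng(seed)
    R = rng.uniform(-1.0, 1.0, size=(dim, n_directions))
    Q, _ = np.linalg.qr(R)             # orthonormal basis of the random subspace
    return np.eye(dim) - Q @ Q.T       # projection onto its nullspace

# e.g., mimicking 8 INLP iterations on a 12-class property over BERT vectors:
P_rand = random_direction_projection(dim=768, n_directions=8 * 12)
```

The resulting matrix has rank dim − n_directions, so it discards exactly as much representational capacity as the corresponding amnesic projection, just in uninformed directions.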
If the Rand impact on performance is lower than the impact of amnesic probing for some property, we conclude that we removed important directions for the main task. Otherwise, when the Rand control has a similar or higher impact, we conclude that there is no evidence for property usage for the main task.
Control over Selectivity5 The result of the am-
nesic probing is taken as an indication of whether
or not the model we query makes use of the
inspected property for prediction. 然而, 这
removed features might solely correlate with the property (e.g., word position in the sentence has a nonzero correlation to syntactic function). To
what extent is the information removal process we
employ selective to the property in focus?
We test that by explicitly providing the gold
information that has been removed from the
3Concretely, we use linear SVM (Pedregosa et al., 2011).
4All relevant directions are removed to the extent they
are identified by the classifiers we train. Therefore, we run
INLP until the last linear classifier achieves a score within
one point above majority accuracy on the development set.
5Not to be confused with Hewitt and Liang's (2019) Selectivity. Although recommended for use when performing standard probing, we argue it does not fit as a control for amnesic probing, and provide a detailed explanation in Appendix B.
representation, and finetuning the subsequent
layers (while the rest of the network is frozen).
Restoring the original performance is taken as
evidence that the property we aimed to remove
is enough to account for the damage sustained
by the amnesic intervention (it may still be the
case that the intervention removes unrelated properties; but given the explicitly-provided property information, the model can make up for the damage). However, if the original performance is
not restored, this indicates that the intervention
removed more information than intended, and
this cannot be accounted for by merely explicitly
providing the value of the single property we
focused on.
Concretely, we concatenate feature vectors of the studied property to the amnesic representations. Those vectors are 32-dimensional, and are
initialized randomly, with a unique vector for each
value of the property of interest. Those are fine-
tuned until convergence. We note that as the new
representation vectors are of a higher dimension
than the original ones, we cannot use the original matrix. For an easier learning process, we use the original embedding matrix and concatenate it with a new embedding matrix, randomly initialized, and treat it as the new decision function.
3.2 Studied Properties
We focus on six sequence tagging tasks: coarse and fine-grained part-of-speech tags (c-pos and f-pos, respectively); syntactic dependency labels (dep); named-entity labels (ner); and syntactic constituency boundaries7 that mark the beginning and the end of a phrase (phrase start and phrase end, respectively).
We use the training and dev data of the following datasets for each task: the English UD Treebank (McDonald et al., 2013) for c-pos, f-pos, and dep; and English OntoNotes (Weischedel et al., 2013) for ner, phrase start, and phrase end. For training, we use 100,000 random tokens from those datasets.
3.3 Metrics
We report the following metrics:
LM accuracy: Word prediction accuracy.
Kullback-Leibler Divergence (DKL): We calculate the DKL between the model's distribution over tokens, before and after the amnesic intervention. This measure focuses on the entire distribution, rather than the correct token only. Larger values imply a more significant change.
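As a concrete reading of this metric, the divergence between two distributions over a toy vocabulary can be computed directly (the smoothing constant eps is our addition, to avoid log 0):

```python
import numpy as np

def dkl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(p || q) between two distributions
    over the vocabulary, e.g., the model's word distribution before (p)
    and after (q) the amnesic intervention."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

before = np.array([0.7, 0.2, 0.1])        # toy 3-word vocabulary
after_small = np.array([0.65, 0.25, 0.1]) # mild intervention
after_large = np.array([0.1, 0.2, 0.7])   # drastic intervention
# A larger value indicates a bigger change to the distribution:
print(dkl(before, after_small) < dkl(before, after_large))  # True
```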
3 Studying BERT: Experimental Setup
3.1 Models
We use our proposed method to investigate BERT (Devlin et al., 2019),6 a popular and competitive masked language model (MLM) that has recently been the subject of many analysis works (e.g., Hewitt and Manning, 2019; Liu et al., 2019a; Tenney et al., 2019a). While most probing works focus on the ability to decode a certain linguistic property of the input text from the representation, we aim to understand which information is being used by it when predicting words from context. For example, we seek to answer questions such as the following: ''Is POS information used by the model in word prediction?'' The following experiments focus on language modeling, as a basic and popular task, but our method is more widely applicable.
4 To Probe or Not to Probe?
By using the probing technique, different linguistic phenomena such as POS, dependency information, and NER (Tenney et al., 2019a; Liu et al., 2019a; Alt et al., 2020) have been found to be ''easily extractable'' (typically using linear probes). A naive interpretation of these results may conclude that because information can be easily extracted by the probing model, this information is being used for the predictions. We show that this is not the case. Some properties, such as syntactic structure and POS, are very informative and are being used in practice to predict words. However, we also find some properties, such as phrase markers, which the model does not make use of when predicting tokens, in contrast to what one can naively deduce from probing results. This finding is in line with a recent work that observed the same behavior (Ravichander et al., 2020).
For each linguistic property, we report the prob-
ing accuracy using a linear model, as well as the
6Specifically, BERT-BASE-UNCASED (Wolf et al., 2019).
7Based on the Penn Treebank syntactic definitions.
Properties                dep     f-pos   c-pos   ner     phrase start   phrase end
N. dir                    738     585     264     133     36             22
N. classes                41      45      12      19      2              2
Probing   Majority        11.44   13.22   31.76   86.09   59.25          58.51
Probing   Vanilla         76.00   89.50   92.34   93.53   85.12          83.09
LM-Acc    Vanilla         94.12   94.12   94.12   94.00   94.00          94.00
LM-Acc    Rand            12.31   56.47   89.65   92.56   93.75          93.86
LM-Acc    Selectivity     73.78   92.68   97.26   96.06   96.96          96.93
LM-Acc    Amnesic         7.05    12.31   61.92   83.14   94.21          94.32
LM-DKL    Rand            8.11    4.61    0.36    0.08    0.01           0.01
LM-DKL    Amnesic         8.53    7.63    3.21    1.24    0.01           0.01

Table 1: Property statistics, probing accuracies, and the influence of the amnesic intervention on the model's distribution over words. dep: dependency edge identity; f-pos and c-pos: fine-grained and coarse POS tags; phrase start and phrase end: beginning and end of phrases. Rand refers to replacing our INLP-based projection with removal of an equal number of random directions from the representation. The number of iterations per task can be inferred from N. dir / N. classes.
word prediction accuracy after removing informa-
tion about that property. The results are summa-
rized in Table 1.8 Probing achieves substantially
higher performance over majority across all tasks.
Moreover, after neutralizing the studied property from the representation, the performance on that task drops to majority (not presented in the table for brevity). Next, we compare the LM
performance before and after the projection and
observe a major drop for dep and f-pos information
(decrease of 87.0 and 81.8 accuracy points, respectively), and a moderate drop for c-pos and ner information (decrease of 32.2 and 10.8 accuracy points, respectively). For these tasks,
Rand performance on LM-Acc is lower than the
original scores, but substantially higher than the
Amnesic scores. Recall that the Rand experiment is done with respect to the amnesic probing, thus the number of removed dimensions is the same, but each task may differ in the number of dimensions removed. Furthermore, the DKL metric shows the same trend (but in reverse, as a lower value indicates a smaller change). We also report the
selectivity results, where in most experiments the
LM performance is restored, indicating amnesic
probing works as expected. Note that the dep performance is not fully restored; thus, some non-related features must have been coupled and removed with the dependency features. We believe that this happens in part due to the large number of removed directions.9 These results suggest that, to a large degree, the damage to LM performance is to be attributed to the specific information we remove, and not to rank-reduction alone. We conclude that dependency information, POS, and NER are important for word prediction.
8Note that because we use two different datasets (the UD Treebank for dep, f-pos, and c-pos; OntoNotes for ner, phrase-start, and phrase-end), the Vanilla LM-Acc performance differs between these setups.
Interestingly, for phrase start and phrase end we observe a small improvement in accuracy of 0.21 and 0.32 points, respectively. The performance for the control on these properties is lower; therefore, not only are these properties not important for the LM prediction at this part of the model, they slightly harm it. The last observation
is rather surprising, as phrase boundaries are coupled to the structure of sentences, and the words that form them. A potential explanation for this phenomenon is that this information is simply not being used at this part of the model, and is rather being processed in an earlier stage. We further inspect this hypothesis in Section 7. Finally, the probe accuracy does not correlate with task importance as measured by our method (Spearman correlation of 8.5, with a p-value of 0.871).
These results strengthen recent works that
question the usefulness of probing as an analysis
tool (Hewitt and Liang, 2019; Ravichander et al.,
9Since this experiment involves additional fine-tuning and is not entirely comparable to the vanilla setup (also due to the additional explicit information), we also experiment with concatenating the inspected features and finetuning. This results in an improvement of 3-4 points above the vanilla experiment.
Properties                dep     f-pos   c-pos   ner     phrase start   phrase end
N. dir                    820     675     240     95      35             52
N. classes                41      45      12      19      2              2
Probing   Majority        11.44   13.22   31.76   86.09   59.25          58.51
Probing   Vanilla         71.19   78.32   84.40   90.68   85.53          83.21
LM-Acc    Vanilla         56.98   56.98   56.98   57.71   57.71          57.71
LM-Acc    Rand            4.67    24.69   54.55   56.88   57.46          57.27
LM-Acc    Selectivity     20.46   59.51   66.49   60.35   60.97          60.80
LM-Acc    Amnesic         4.67    6.01    33.28   48.39   56.89          56.19
LM-DKL    Rand            7.77    6.10    0.45    0.10    0.02           0.04
LM-DKL    Amnesic         7.77    7.26    3.36    1.39    0.06           0.13

Table 2: Amnesic probing results for the masked representations: property statistics, word-prediction accuracy, and DKL results for the different properties inspected in this work. We report the vanilla word prediction accuracy and the Amnesic scores, as well as the Rand and Selectivity controls, which show minimal information loss and high selectivity (except for the dep property, for which all information was removed). The DKL is also reported for all properties in the last rows, which show similar trends as the accuracy performance.
2020), but measure it via the usefulness of properties for the main task. We conclude that high probing performance does not entail that this information is being used at a later part of the network.
5 What Properties are Important for the
Pre-Training Objective?
Probing studies tend to focus on representations
that are used for an end-task (usually the last
hidden layer before the classification layer). In the
case of MLM models, the words are not masked
when encoding them for downstream tasks.
然而, these representations are different
from those used during the pre-training LM phase
(of interest to us), where the input words are
masked. It is therefore unclear if the conclusions
drawn from conventional probing also apply to
the way that the pre-trained model operates.
From this section on, unless mentioned otherwise, we report our experiments on the masked words. That is, given a sequence of tokens x1, . . . , xi, . . . , xn, we encode the representation of each token xi using its context, as follows: x1, . . . , xi−1, [MASK], xi+1, . . . , xn. The rest of the tokens remain intact. We feed these input tokens to BERT, and only use the masked representation of each word in its context, h(x1, . . . , xi−1, [MASK], xi+1, . . . , xn)i.
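The input construction can be illustrated in pure Python; the actual encoding is done by BERT, and this only shows how each position gets its own masked copy of the sequence:

```python
def mask_token(tokens, i, mask_token="[MASK]"):
    """Replace the i-th token with [MASK], keeping the rest of the
    sequence intact, as in the masked-encoding setup."""
    return tokens[:i] + [mask_token] + tokens[i + 1:]

# Each position gets its own masked copy of the sentence; the model's
# representation at position i is then read off that copy.
sentence = ["the", "dog", "ran", "home"]
masked_inputs = [mask_token(sentence, i) for i in range(len(sentence))]
print(masked_inputs[2])   # ['the', 'dog', '[MASK]', 'home']
```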
Figure 2: LM accuracy over INLP iterations, for the masked tokens version. We present both the vanilla word-prediction score (straight, blue line), as well as the control (orange, large circles) and INLP (green, small circles). Note that the number of removed dimensions per iteration differs, based on the number of classes of that property.
We repeat the experiments from Section 4 and report the results in Table 2. As expected, the LM accuracy drops significantly, as the model does not have access to the original word, and has to infer it based only on context. Overall, the
c-pos          Vanilla   Rand    Amnesic   Δ
verb           46.72     44.85   34.99     11.73
noun           42.91     38.94   34.26     8.65
adposition     73.80     72.21   37.86     35.93
determiner     82.29     83.53   16.64     65.66
numeral        40.32     40.19   33.41     6.91
punctuation    80.71     81.02   47.03     33.68
particle       96.40     95.71   18.74     77.66
conjunction    78.01     72.94   4.28      73.73
adverb         39.84     34.11   23.71     16.14
pronoun        70.29     61.93   33.23     37.06
adjective      46.41     42.63   34.56     11.85
other          70.59     76.47   52.94     17.65

Table 3: Masked, c-pos removal, fine-grained LM analysis. Removing c-pos information and testing the accuracy performance of words, accumulated by their label. Δ is the difference in performance between the Vanilla and Amnesic scores.
trends in the masked setting are similar to those in the non-masked setting. However, this is not always the case, as we show in Section 7. We also report the selectivity control. Notice that the performance for this experiment improved across all tasks. In the case of dep and f-pos, where we had to neutralize most of the dimensions, the performance does not fully recover. Note that the number of classes in those experiments might be a factor in the large performance gaps (expressed by the number of removed dimensions, N. dir, in the table). While not part of this study, it would be interesting to control for this factor in future work. However, for the rest of the properties (c-pos, ner, and the phrase-markers) the performance is fully recovered, showing our method's selectivity.
To further study the effect of INLP and inspect how the removal of different dimensions affects performance, we display in Figure 2 the LM performance after each iteration, both with the amnesic probing and the control, and observe a consistent gap between them. Moreover, we highlight the difference in the slope for our method and the random direction removal. The amnesic probing exemplifies a much steeper slope than the random directions, indicating that the studied properties are indeed correlated with word prediction. We also provide the main task performance after each iteration in Figure 5 in the Appendix, which steadily decreases with each iteration.
c-pos          Vanilla   Amnesic   Δ
verb           56.98     55.60     1.38
noun           56.98     55.79     1.19
adposition     56.98     53.40     3.58
determiner     56.98     51.04     5.94
numeral        56.98     55.88     1.10
punctuation    56.98     53.12     3.86
particle       56.98     55.26     1.72
conjunction    56.98     54.29     2.69
adverb         56.98     55.64     1.34
pronoun        56.98     54.97     2.02
adjective      56.98     55.95     1.03
Table 4: Word prediction accuracy after fine-grained tag distinction removal, masked version. Rand control performances are all between 56.05 and 56.49 accuracy (with a maximum difference from vanilla of 0.92 points).
6 Specific Labels and Word Prediction
In the previous sections we observed the impact (or lack thereof) of different properties on word prediction. But when a property affects word prediction, are all words affected similarly? In this section, we inspect a more fine-grained version of the properties of interest, and study their impact on word prediction.
Fine-Grained Analysis When we remove the POS information from the representation, are nouns affected to the same degree as conjunctions? We repeat the masked experimental setting from Section 5, but this time we inspect the word prediction performance for the different labels. We report the results for the c-pos tagging in
Table 3. We observe large differences in the word prediction performance before and after the POS removal between the labels. Nouns, numerals, and verbs show a relatively small impact on performance (8.64, 6.91, and 11.73 points, respectively), while conjunctions, particles, and determiners demonstrate large performance drops (73.73, 77.66, and 65.65, respectively). We see that the information about POS labels at the word-level prediction is much more important in closed-set vocabularies (such as conjunctions and determiners) than in open vocabularies (such as nouns and verbs).
A manual inspection of the predicted words after removing the POS information reveals that many of the changes are due to the transformation of function words into content words. For example, the words 'and', 'of', and 'a' become 'rotate', 'say', and 'final', respectively, in the inspected sentence. For a more quantitative analysis, we use a POS tagger in order to measure the POS label confusion before and after the intervention. Out of the 12,700 determiners, conjunctions, and punctuation tokens, 200 of the words predicted by BERT were tagged as nouns and verbs before the intervention, compared to 3,982 after.
Removal of Specific Labels Following the observation that classes are affected differently when predicting words, we further investigate the differences of specific label removal. To this end, we repeat the amnesic probing experiments, but instead of removing the fine-grained information of a linguistic property, we make a cruder removal: the distinction between a specific label and the rest. For example, with POS as the general property, we now investigate whether the information of noun vs. the rest is important for predicting a word. We perform this experiment for all of the pos-c labels, and report the results in Table 4.10
We observe large performance gaps when removing different labels. For example, removing the distinction between nouns and the rest, or verbs and the rest, has minimal impact on performance. On the other hand, determiners and punctuation are highly affected. This is consistent with the previous observation on removing specific information. These results call for more detailed observations and experiments when studying a phenomenon, as the fine-grained property distinction does not behave the same across labels.11
7 Behavior Across Layers
The results up to this section treat all of BERT's 'Transformer blocks' (Vaswani et al., 2017) as the encoding function and the embedding matrix
10In order to properly compare the different properties, we run INLP for only 60 iterations for each property. Since the 'other' tag is not common, we omit it from this experiment.
11We repeat these experiments with the other studied
properties and observe similar trends.
as the model. But what happens when we remove
the information of some linguistic property from
earlier layers?
By using INLP to remove a property from
an intermediate layer, we prevent the subsequent
layer from using linearly present information originally stored in that layer. Though this operation does not erase all the information correlated with the studied property (as INLP only removes linearly present information), it makes it harder for the model to use this information. Concretely,
we begin by extracting the representation of some
text from the first k layers of BERT and then
run INLP on these representations to remove the
property of interest. Given that we wish to study
the effect of a property on layer i, we project the
representation using the corresponding projection
matrix Pi that was learned on those representations,
and then continue the encoding of the following
layers.12
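A toy sketch of this layer-wise intervention: the 'layers' and the single-direction projection below are illustrative stand-ins for BERT's transformer blocks and the INLP-learned Pi, showing only the control flow of projecting one layer's output before continuing the encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
# Toy stand-ins for BERT's transformer blocks (illustrative only):
layers = [rng.normal(scale=0.1, size=(DIM, DIM)) + np.eye(DIM) for _ in range(4)]

def encode(h, intervene_at=None, P=None):
    """Run the layer stack; optionally project the hidden state with P
    (learned by INLP on layer `intervene_at`) before the next layer."""
    for k, W in enumerate(layers):
        h = np.tanh(h @ W)
        if k == intervene_at:
            h = h @ P.T     # amnesic projection applied to this layer's output
    return h

# Projection removing one direction (in practice, P comes from INLP):
P = np.eye(DIM)
P[0, 0] = 0.0
h0 = rng.normal(size=DIM)
out_clean = encode(h0)
out_amnesic = encode(h0, intervene_at=1, P=P)
print(np.allclose(out_clean, out_amnesic))   # the intervention changes the encoding
```

The subsequent layers then operate on the projected state, which is exactly what allows asking where in the stack a property stops being recoverable.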
7.1 Property Recovery After an Amnesic
Operation
Is the property we linearly remove from a given
layer recoverable by subsequent layers? We remove
the information about some linguistic property
from layer i, and learn a probe classifier
on all subsequent layers i + 1, . . . , n. This tests
how much information about this property the
following layers have recovered. We experiment
with the properties that could be removed
without reducing too many dimensions: pos-c,
ner, phrase start, and phrase end. These results are
summarized in Figure 3, both for the non-masked
version (upper row) and the masked version (lower
row).
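Such a recovery probe can be sketched as follows. The synthetic "layer i+1" representations and the least-squares linear probe are illustrative assumptions, not the paper's actual setup; the idea is only that high probe accuracy on a later layer means the removed property was re-encoded.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for layer i+1 representations that partially re-encode
# a binary property y after its removal at layer i.
n, dim = 300, 16
y = rng.integers(0, 2, size=n)
signal = rng.normal(size=dim)
H_next = rng.normal(size=(n, dim)) + np.outer(2 * y - 1, signal)

def probe_accuracy(H, labels):
    """Fit a least-squares linear probe and report its (training) accuracy."""
    w, *_ = np.linalg.lstsq(H, 2 * labels - 1, rcond=None)
    return float(((H @ w > 0) == labels.astype(bool)).mean())

# Accuracy well above the 0.5 chance level indicates the property is recoverable.
print(probe_accuracy(H_next, y))
```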
Notably, for the pos-c, non-masked version, the
information is highly recoverable in subsequent
layers when applying the amnesic operation on
the first seven layers: the performance drops from
the regular probing of that layer by between 5.72 and
12.69 accuracy points. However, in the second
part of the network, the drop is substantially larger:
between 16.57 and 46.39 accuracy points. For the
masked version, we witness an opposite trend: the
pos-c information is much less recoverable in the
lower parts of the network than in the upper parts. In
particular, the removal of pos-c from the second

12As the representations used to train INLP do not include
BERT's special tokens (e.g., 'CLS', 'SEP'), we also do not
use the projection matrix on those tokens.
Figure 3: Layer-wise removal. Removing from layer i (the rows) and testing probing performance on layer j (the
columns). Top row (3a) is the non-masked version, bottom row (3b) is the masked version.
layer appears to affect the rest of the layers, 哪个
do not manage to recover a high score on this task,
范围从 32.7 到 42.1 准确性.
For all of the non-masked experiments, the upper
layers seem to make it harder for the subsequent
layers to extract the property. In the masked
version, however, there is no consistent trend.
It is harder to extract properties after the lower
parts for pos-c and ner. For phrase start, the upper
part makes it harder for further extraction, and
for phrase end, both the lower and upper parts
make it harder, as opposed to the middle layers.
Further research is needed in order to understand
the significance of these findings, and whether or
not they are related to information usage across
layers.
This leads us to the final experiment, where we
test the main task performance after an amnesic
operation at the intermediate layers.
7.2 Re-rediscovering the NLP Pipeline
In the previous set of experiments, we measured
how much of the signal removed in layer i is
recovered in subsequent layers. We now study
how the removal of information in layer i affects
the word prediction accuracy at the final layer,
in order to obtain a complementary measure of
layer importance with respect to a property. The
results for the different properties are presented
in Figure 4, where we plot the difference in word
prediction performance between the control and
the amnesic probing when removing a linguistic
property from a certain layer.
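As a rough sketch, the quantity plotted per layer and property is an accuracy difference of the following form. The linear "readout" and hand-made projection below are illustrative, and for simplicity the comparison here is against the vanilla score, whereas the paper's actual baseline is a control that removes random directions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "word prediction": logits are a linear readout of final representations.
n, dim, vocab = 400, 12, 6
H = rng.normal(size=(n, dim))
readout = rng.normal(size=(dim, vocab))
gold = (H @ readout).argmax(axis=1)   # the intact model's own predictions

# Hypothetical amnesic projection: zeroes three coordinates, standing in for
# an INLP projection learned for some linguistic property.
P = np.eye(dim)
P[:3, :3] = 0.0

acc_vanilla = float(((H @ readout).argmax(axis=1) == gold).mean())
acc_amnesic = float(((H @ P @ readout).argmax(axis=1) == gold).mean())

# The per-property, per-layer curves plot a difference of this kind.
print(acc_vanilla - acc_amnesic)
```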
These results provide a clear interpretation
of the internal function of BERT's layers. For
the masked version (Figure 4), we observe that
the pos-c properties are mostly important in
layer 3 and its surrounding layers, as well as in
layer 12. However, this information is accurately
extractable only towards the last layers. For ner,
we observe that the main performance loss occurs
at layer 4. For phrase markers, the middle layers
are important: layers 5 and 7 for phrase start
(although the absolute performance loss is not
large), and layer 6 for phrase end contributes the
most to the word prediction performance.
The story with the non-masked version is quite
different (Figure 4). First, notice that the amnesic
operation improves the LM performance for all
properties, in some layers.13 Second, for all
properties, the peak drop in performance occurs at
different layers than in the masked version experiments.
In particular, it seems that for pos-c, when the words are not
masked in the input, the most important layer
for pos-c is layer 11 (and not layer 3, as in the
masked version), while this information is easily
extractable (by standard probing) across all layers
(more than 80% accuracy).
Interestingly, the conclusions we draw on layer
importance from amnesic probing partly differ

13Giulianelli et al. (2018) observed a similar behavior by
performing an intervention on LSTM activations.
Figure 4: The influence of the different properties, from each layer, on LM predictions. Top figure (4a) shows the
results on the regular, non-masked version; bottom figure (4b) for the masked version. Colors allow easy layer
comparison across graphs.
from the ones in the ''Pipeline processing'' hypothesis
(Tenney et al., 2019a), which aims to
localize and attribute information processing of
linguistic properties to parts of BERT (for the
non-masked version).14 On the one hand, the ner
experiment trends are similar: the last layers
are much more important than earlier ones (in
particular, layer 11 is the most affected in our
case, with a decrease of 31.09 accuracy points).
On the other hand, in contrast to their hypotheses,
we find that POS information, pos-c (which is
considered to be more important in the earlier
layers), affects the word prediction performance
much more in the upper layers (a 40.99 accuracy-point
loss in the 11th layer). Finally, we note that our
approach performs an ablation of these properties
in the representation space, which reveals which
layers are actually responsible for processing
properties, as opposed to Tenney et al. (2019a),
who focused on where this information is easily
extractable.

We note the big differences in behavior when
analyzing the masked vs. the non-masked version
of BERT, and call for future work to make
clearer distinctions between the two. Finally, we
stress that the different experiments should not be
compared from one setting to the other, hence
the different y-scales in the figures. This is
due to confounding variables (e.g., the number
of removed dimensions from the representations),
which we do not control for in this work.

14We note that this work analyzes BERT-base, in contrast
to Tenney et al. (2019a), who analyzed BERT-Large.

8 Related Work

With the established impressive performance of
large pre-trained language models (Devlin et al.,
2019; Liu et al., 2019b), based on the Transformer
architecture (Vaswani et al., 2017), a large body of
work has started studying and gaining insight into how
these models work and what they encode.15 For
a thorough summary of these advancements, we
refer the reader to a recent primer on the subject
(Rogers et al., 2020).

15These works cover a wide variety of topics, including
grammatical generalization (Goldberg, 2019; Warstadt et al.,
2019), syntax (Tenney et al., 2019b; Lin et al., 2019; Reif
et al., 2019; Hewitt and Manning, 2019; Liu et al., 2019a),
world knowledge (Petroni et al., 2019; Jiang et al., 2020),
reasoning (Talmor et al., 2019), and common sense (Forbes
et al., 2019; Zhou et al., 2019; Weir et al., 2020).
A particularly popular and easy-to-use inter-
pretation method is probing (Conneau et al.,
2018). Despite its popularity, recent works have
questioned the use of probing as an interpretation
tool. Hewitt and Liang (2019) have emphasized
the need to distinguish between decoding and
learning the probing tasks. They introduced con-
trol tasks, a consistent but linguistically mea-
ningless attribution of labels to tokens, 并有
shown that probes trained on the control tasks
often perform well, due to the strong lexical infor-
mation held in the representations and learned by
the probe. This leads them to propose a selectivity
measure that aims to choose probes which achieve
high accuracy only on linguistically-meaningful
任务. Tamkin et al. (2020) claim that probing
cannot serve as an explanation of downstream
task success. They observe that the probing scores
do not correlate with the transfer scores achieved
by fine-tuning.
Finally, Ravichander et al. (2020) show that
probing can achieve non-trivial results for linguistic
properties that were not needed for the task the
model was trained on. In this work, we observe a
similar phenomenon, but from a different angle.
We actively remove some property of interest
from the queried representation, and measure
the impact of the amnesic representation of the
property on the main task.
Two recent works study the probing paradigm
from an information-theoretic perspective. Pimentel
et al. (2020) emphasize that under a mutual-information
maximization objective, ''better'' probes
are increasingly more accurate, regardless of
complexity. They use the data-processing
inequality to question the rationale behind methods
that focus on encoding, and propose ease of
extractability as an alternative criterion. Voita and
Titov (2020) follow this direction, using the
concept of minimum description length (MDL;
Rissanen, 1978) to quantify the total information
needed to transmit both the probing model and
the labels it predicts. Our discussion here is
somewhat orthogonal to those on the meaning
of encoding and probe complexity, as we focus on
the information's influence on the model's behavior,
rather than on the ability to extract it from the
representation.
Finally, and concurrently with this work, Feder et al.
(2020) studied a similar question of causal
attribution of concepts to representations, using
adversarial training guided by causal graphs.
9 Discussion

Intuitively, we would like to completely neutralize
the abstract property we are interested in—e.g.,
POS information (completeness), as represented
by the model—while keeping the rest of the
representation intact (selectivity). This is a
nontrivial goal, as it is not clear whether neural
models actually have abstract and disentangled
representations of properties such as POS, which
are independent of other properties of the text.
It may be the case that the representations of
many properties are intertwined. Indeed, there is
an ongoing debate on the assertion that certain
information is ''encoded'' in the representation
(Voita and Titov, 2020; Pimentel et al., 2020).
However, even if a disentangled representation of
the information we focus on exists, it is not clear
how to detect it.
We implement the information removal operation
with INLP, which gives a first-order approximation
using linear classifiers; we note, however,
that one can in principle use other approaches to
achieve the same goal. While we show that we do
remove the linear ability to predict the properties
and provide some evidence for the selectivity of
this method (§2), one has to bear in mind that we
remove only linearly present information, and so
the classifiers can rely on arbitrary features that
happen to correlate with the gold label, be it a result
of spurious correlations or inherent encoding of the
direct property. Indeed, we observe this behavior
in Section 7.1 (Figure 3), where we neutralize the
information from certain layers, but occasionally
observe higher probing accuracy in following
layers. We thus stress that the information we
remove in practice should be seen only as an
approximation of the abstract information we are
interested in, and that one has to be cautious with
causal interpretations of the results. Although in
this paper we use the INLP algorithm in order to
remove linear information, amnesic probing is not
restricted to removing linear information. When
non-linear removal methods become available,
they can be swapped in instead of INLP. This
stresses the importance of creating algorithms for
non-linear information removal.
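A single INLP-style removal step can be sketched in a few lines of NumPy. The least-squares "probe" below is a simplification of the iterated linear classifiers INLP actually trains; the toy data and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy representations whose first two coordinates linearly encode a binary label.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fit a linear predictor for the property, then project the representations
# onto the nullspace of its weight vector, removing that linear direction.
w, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
w = w / np.linalg.norm(w)
P = np.eye(X.shape[1]) - np.outer(w, w)   # rank-1 nullspace projection

X_amnesic = X @ P

# The removed direction is now numerically absent from the representations;
# INLP repeats this until no linear classifier performs above chance.
print(float(np.abs(X_amnesic @ w).max()))  # ~0
```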
Another unanswered question is how to quantify
the relative importance of different properties
encoded in the representation for the word
prediction task. The different erasure portions
for the different properties make it hard to draw
conclusions on which property is more important
for the task of interest. Although we do not
make claims such as ‘‘dependency information is
more important than POS’’, these are interesting
questions that should be further discussed and
researched.
10 Conclusion

In this work, we propose a new method, Amnesic
Probing, which aims to quantify the influence of
specific properties on a model that is trained on a
task of interest. We demonstrate that conventional
probing falls short in answering such behavioral
questions, and perform a series of experiments
on different linguistic phenomena, quantifying
their influence on the masked language modeling
task. Furthermore, we inspect both the unmasked
and masked BERT representations and detail
the differences between them, which we find to
be substantial. We also highlight the different
influence of specific fine-grained properties (例如,
nouns and determiners) on the final task. Finally,
we use our proposed method on the different
layers of BERT, and study which parts of the
model make use of the different properties.
Taken together, we argue that, compared with
probing, counterfactual intervention—such as the
one we present here—can provide a richer and
more refined view of the way symbolic linguistic
information is encoded and used by neural models
with distributed representations.16
Acknowledgments

We would like to thank Hila Gonen,
Amit Moryossef, Divyansh Kaushik, Abhilasha
Ravichander, Uri Shalit, Felix Kreuk, Jurica Ševa,
and Yonatan Belinkov for their helpful comments
and discussions. We also thank the anonymous
reviewers and the action editor, Radu Florian, for
their valuable suggestions.
This project has received funding from the
European Research Council (ERC) under the
European Union's Horizon 2020 research and
innovation programme, grant agreement no.
802774 (iEXTRACT). Yanai Elazar is grateful
to be partially supported by the PBC fellowship
for outstanding PhD candidates in Data Science.
16All of the experiments were logged and tracked using
Weights and Biases (Biewald, 2020).
References
Yossi Adi, Einat Kermany, Yonatan Belinkov,
Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained
analysis of sentence embeddings using
auxiliary prediction tasks. CoRR, abs/1608.04207.
Christoph Alt, Aleksandra Gabryszak, and
Leonhard Hennig. 2020. Probing linguistic features
of sentence-level representations in relation
extraction. In Proceedings of the 58th
Annual Meeting of the Association for Computational
Linguistics, pages 1534–1545, Online.
Association for Computational Linguistics.
Lukas Biewald. 2020. Experiment tracking with
weights and biases. Software available from
wandb.com.
Alexis Conneau, Germán Kruszewski, Guillaume
Lample, Loïc Barrault, and Marco Baroni.
2018. What you can cram into a single $&!#*
vector: Probing sentence embeddings for linguistic
properties. In Proceedings of the 56th
Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers),
pages 2126–2136. DOI: https://doi.org/10.18653/v1/P18-1198
Jacob Devlin, Ming-Wei Chang, Kenton Lee,
and Kristina Toutanova. 2019. BERT: Pre-training
of deep bidirectional transformers for
language understanding. In Proceedings of
the 2019 Conference of the North American
Chapter of the Association for Computational
Linguistics: Human Language Technologies,
NAACL-HLT 2019, Minneapolis, MN, USA,
June 2–7, 2019, Volume 1 (Long and Short
Papers), pages 4171–4186.
Yanai Elazar and Yoav Goldberg. 2018. Adversarial
removal of demographic attributes from text
data. In Proceedings of the 2018 Conference
on Empirical Methods in Natural Language
Processing, pages 11–21. Association for
Computational Linguistics. DOI: https://doi.org/10.18653/v1/D18-1002
Amir Feder, Nadav Oved, Uri Shalit, and
Roi Reichart. 2020. CausaLM: Causal model
explanation through counterfactual language
models.
Maxwell Forbes, Ari Holtzman, and Yejin Choi.
2019. Do neural language representations learn
physical commonsense? In Proceedings of the
41st Annual Conference of the Cognitive
Science Society.
Yaroslav Ganin, Evgeniya Ustinova, Hana
Ajakan, Pascal Germain, Hugo Larochelle,
François Laviolette, Mario Marchand, and
Victor Lempitsky. 2016. Domain-adversarial
training of neural networks. Journal of Machine
Learning Research, 17(1):2096–2030.
Mario Giulianelli, Jack Harding, Florian Mohnert,
Dieuwke Hupkes, and Willem Zuidema. 2018.
Under the hood: Using diagnostic classifiers
to investigate and improve how language models
track agreement information. In Proceedings
of the 2018 EMNLP Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks
for NLP, pages 240–248. DOI: https://doi.org/10.18653/v1/W18-5426
Yoav Goldberg. 2019. Assessing BERT's syntactic
abilities. arXiv preprint arXiv:1901.05287.
Yash Goyal, Uri Shalit, and Been Kim. 2019.
Explaining classifiers with causal concept effect
(CaCE). arXiv preprint arXiv:1907.07165.
John Hewitt and Percy Liang. 2019. Designing and
interpreting probes with control tasks. In Empirical
Methods in Natural Language Processing
(EMNLP). DOI: https://doi.org/10.18653/v1/D19-1275
John Hewitt and Christopher D. Manning. 2019.
A structural probe for finding syntax in word
representations. In Proceedings of the Conference
of the North American Chapter of
the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT,
pages 4129–4138.
Dieuwke Hupkes, Sara Veldhoen, and Willem
Zuidema. 2018. Visualisation and 'diagnostic
classifiers' reveal how recurrent and recursive
neural networks process hierarchical structure.
Journal of Artificial Intelligence Research,
61:907–926. DOI: https://doi.org/10.1613/jair.1.11196
Zhengbao Jiang, Frank F. Xu, Jun Araki, and
Graham Neubig. 2020. How can we know what
language models know? Transactions of the
Association for Computational Linguistics,
8:423–438. DOI: https://doi.org/10.1162/tacl_a_00324
Divyansh Kaushik, Eduard Hovy, and Zachary
Lipton. 2020. Learning the difference that
makes a difference with counterfactually-
augmented data. In International Conference
on Learning Representations.
Yongjie Lin, Yi Chern Tan, and Robert Frank.
2019. Open Sesame: Getting inside BERT's
linguistic knowledge. In Proceedings of the
2019 ACL Workshop BlackboxNLP: Analyzing
and Interpreting Neural Networks for NLP,
pages 241–253.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov,
Matthew E. Peters, and Noah A. Smith. 2019a.
Linguistic knowledge and transferability of
contextual representations. In Proceedings of
the 2019 Conference of the North American
Chapter of the Association for Computational
Linguistics: Human Language Technologies,
Volume 1 (Long and Short Papers),
pages 1073–1094.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei
Du, Mandar Joshi, Danqi Chen, Omer Levy,
Mike Lewis, Luke Zettlemoyer, and Veselin
Stoyanov. 2019b. RoBERTa: A robustly optimized
BERT pretraining approach. arXiv preprint
arXiv:1907.11692.
Ryan McDonald, Joakim Nivre, Yvonne
Quirmbach-Brundage, Yoav Goldberg, Dipanjan
Das, Kuzman Ganchev, Keith Hall, Slav
Petrov, Hao Zhang, Oscar Täckström, et al.
2013. Universal dependency annotation for
multilingual parsing. In Proceedings of the
51st Annual Meeting of the Association for
Computational Linguistics (Volume 2: Short
Papers), pages 92–97.
Judea Pearl and Dana Mackenzie. 2018. The Book
of Why: The New Science of Cause and Effect.
Basic Books.
Fabian Pedregosa, Gaël Varoquaux, Alexandre
Gramfort, Vincent Michel, Bertrand Thirion,
Olivier Grisel, Mathieu Blondel, Peter
Prettenhofer, Ron Weiss, and Vincent Dubourg.
2011. Scikit-learn: Machine learning in Python.
Journal of Machine Learning Research, 12(Oct):
2825–2830.
Matthew E. Peters, Mark Neumann, Mohit Iyyer,
Matt Gardner, Christopher Clark, Kenton Lee,
and Luke Zettlemoyer. 2018. Deep contextualized
word representations. In Proceedings
of NAACL-HLT, pages 2227–2237. DOI:
https://doi.org/10.18653/v1/N18-1202
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel,
Patrick Lewis, Anton Bakhtin, Yuxiang Wu,
and Alexander Miller. 2019. Language models
as knowledge bases? In Proceedings of
the 2019 Conference on Empirical Methods
in Natural Language Processing and the 9th
International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP),
pages 2463–2473. DOI: https://doi.org/10.18653/v1/D19-1250
Tiago Pimentel, Josef Valvoda, Rowan Hall
Maudslay, Ran Zmigrod, Adina Williams, and
Ryan Cotterell. 2020. Information-theoretic
probing for linguistic structure. DOI:
https://doi.org/10.18653/v1/2020.acl-main.420
Shauli Ravfogel, Yanai Elazar, Hila Gonen,
Michael Twiton, and Yoav Goldberg. 2020.
Null it out: Guarding protected attributes by
iterative nullspace projection. In Proceedings of
the 58th Annual Meeting of the Association for
Computational Linguistics, pages 7237–7256,
Online. Association for Computational Linguistics.
DOI: https://doi.org/10.18653/v1/2020.acl-main.647
Abhilasha Ravichander, Yonatan Belinkov, 和
Eduard Hovy. 2020. Probing the probing
范例: Does probing accuracy entail task
关联? arXiv 预印本 arXiv:2005.00719.
Emily Reif, Ann Yuan, Martin Wattenberg,
Fernanda B. Viégas, Andy Coenen, Adam
Pearce, and Been Kim. 2019. Visualizing and
measuring the geometry of BERT. In Advances
in Neural Information Processing Systems,
pages 8592–8600.
Jorma Rissanen. 1978. Modeling by shortest data
description. Automatica, 14(5):465–471. DOI:
https://doi.org/10.1016/0005-1098(78)90005-5
Anna Rogers, Olga Kovaleva, and Anna
Rumshisky. 2020. A primer in BERTology: What
we know about how BERT works. arXiv preprint
arXiv:2002.12327. DOI: https://doi.org/10.1162/tacl_a_00349
Alon Talmor, Yanai Elazar, Yoav Goldberg, and
Jonathan Berant. 2019. oLMpics: On what
language model pre-training captures. DOI:
https://doi.org/10.1162/tacl_a_00342
Alex Tamkin, Trisha Singh, Davide Giovanardi,
and Noah Goodman. 2020. Investigating
transferability in pretrained language models.
arXiv preprint arXiv:2004.14975. DOI:
https://doi.org/10.18653/v1/2020.findings-emnlp.125
Ian Tenney, Dipanjan Das, and Ellie Pavlick.
2019a. BERT rediscovers the classical NLP
pipeline. In Proceedings of the Conference of
the Association for Computational Linguistics,
ACL, pages 4593–4601. DOI: https://doi.org/10.18653/v1/P19-1452
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang,
Adam Poliak, R. Thomas McCoy, Najoung
Kim, Benjamin Van Durme, Sam Bowman,
Dipanjan Das, and Ellie Pavlick. 2019b.
What do you learn from context? Probing
for sentence structure in contextualized word
representations. In International Conference on
Learning Representations.
Ashish Vaswani, Noam Shazeer, Niki Parmar,
Jakob Uszkoreit, Llion Jones, Aidan N.
Gomez, Łukasz Kaiser, and Illia Polosukhin.
2017. Attention is all you need. In Advances
in Neural Information Processing Systems,
pages 5998–6008.
Jesse Vig, Sebastian Gehrmann, Yonatan
Belinkov, Sharon Qian, Daniel Nevo, Yaron
歌手, and Stuart Shieber. 2020. Causal
mediation analysis for interpreting neural NLP:
The case of gender bias. arXiv 预印本
arXiv:2004.12265.
Elena Voita and Ivan Titov. 2020. Information-theoretic
probing with minimum description
length. arXiv preprint arXiv:2003.12298. DOI:
https://doi.org/10.18653/v1/2020.emnlp-main.14
Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng,
Hagen Blix, Yining Nie, Anna Alsop, Shikha
Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu
Wang, Jason Phang, Anhad Mohananey, Phu
Mon Htut, Paloma Jeretič, and Samuel R.
Bowman. 2019. Investigating BERT's knowledge
of language: Five analysis methods with
NPIs. In Proceedings of the 2019 Conference
on Empirical Methods in Natural Language
Processing and the 9th International Joint
Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 2870–2880. DOI:
https://doi.org/10.18653/v1/D19-1286
Nathaniel Weir, Adam Poliak, and Benjamin
Van Durme. 2020. Probing neural language
models for human tacit assumptions. In 42nd
Annual Virtual Meeting of the Cognitive Science
Society (CogSci).
Ralph Weischedel, Martha Palmer, Mitchell
Marcus, Eduard Hovy, Sameer Pradhan, Lance
Ramshaw, Nianwen Xue, Ann Taylor, Jeff
Kaufman, Michelle Franchini, Mohammed
El-Bachouti, Robert Belvin, and Ann Houston.
2013. OntoNotes Release 5.0 LDC2013T19.
Linguistic Data Consortium, Philadelphia,
PA, 23.
Thomas Wolf, Lysandre Debut, Victor Sanh,
Julien Chaumond, Clement Delangue, Anthony
Moi, Pierric Cistac, Tim Rault, Rémi Louf,
Morgan Funtowicz, and Jamie Brew. 2019.
HuggingFace's Transformers: State-of-the-art
natural language processing. ArXiv, abs/1910.03771.
Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan
Huang. 2019. Evaluating commonsense in
pre-trained language models. arXiv preprint
arXiv:1911.11931.
Appendix A

We provide additional experiments that depict
the performance of the main task (e.g., POS)
during the INLP iterations in Figure 5.
Figure 5: LM accuracy over INLP iterations, for the
masked-tokens version. We present the vanilla
word-prediction score (straight blue line), the
amnesic probing score (orange, small circles), and the main
task performance (red, large circles). For reference, we
also provide the vanilla probing performance of each
task (green, cross marks). Note that the number of
removed dimensions per iteration differs, based on the
number of classes of that property.
Appendix B Hewitt and Liang's
Control Task

The control task (Hewitt and Liang, 2019) has been
suggested as a way to attribute the performance
of a probe to extraction of encoded information,
as opposed to lexical memorization. Our goal in
this work, however, is not to extract information
from the representation (as is done in conventional
probing) but to measure a behavioral outcome.
Since the control task is solved by lexical
memorization, applying INLP to the control task's
classifiers erases lexical information (i.e., erases
the ability to distinguish between arbitrary words),
which is at the core of the LM objective and
which is highly correlated with many of the other
linguistic properties, such as POS. We argue that
even if we do see a significant drop in performance
with the control task, this says little about the
validity of the results of removing the linguistic
property (e.g., POS). However, for completeness,
we provide the results in Figure 6. As can be seen
from this figure, this control's slope is smaller than
the one of the amnesic probing, suggesting that
those directions have less behavioral influence.
However, the slopes are steeper than in the 'Rand'
experiment. This is due to the removal of the
identity of groups of words, caused by the label
shuffle suggested in their setup. This is the reason we
believe this test is not adequate in our case, and
why we provide other tests to control for our
method: Rand and Selectivity (§2.3).
Figure 6: LM accuracy over INLP iterations, for the
masked-tokens version. We present the vanilla
word-prediction score (straight blue line), the
amnesic probing score (orange, small circles), and the
control performance (orange, large circles). We also
provide the control results for the selectivity test proposed
by Hewitt and Liang (2019) (red, crosses). Note that
the number of removed dimensions per iteration differs,
based on the number of classes of that property.