Efficient Contextual Representation Learning
With Continuous Outputs
Liunian Harold Li†, Patrick H. Chen∗, Cho-Jui Hsieh∗, Kai-Wei Chang∗
†Peking University
∗University of California, Los Angeles
liliunian@pku.edu.cn, patrickchen@g.ucla.edu
{chohsieh, kwchang}@cs.ucla.edu
Abstract
Contextual representation models have achieved great success in improving various downstream natural language processing tasks. However, these language-model-based encoders are difficult to train due to their large parameter size and high computational complexity. By carefully examining the training procedure, we observe that the softmax layer, which predicts a distribution of the target word, often induces significant overhead, especially when the vocabulary size is large. Therefore, we revisit the design of the output layer and consider directly predicting the pre-trained embedding of the target word for a given context. When applied to ELMo, the proposed approach achieves a 4-fold speedup and eliminates 80% of the trainable parameters while achieving competitive performance on downstream tasks. Further analysis shows that the approach maintains the speed advantage under various settings, even when the sentence encoder is scaled up.
1 Introduction
In recent years, text representation learning approaches, such as ELMo (Peters et al., 2018a), GPT (Radford et al., 2018), BERT (Devlin et al., 2019), and GPT-2 (Radford et al., 2019), have been developed to represent generic contextual information in natural languages by training an encoder with a language model objective on a large unlabelled corpus. During the training process, the encoder is given part of the text and asked to predict the missing pieces. Prior studies show that encoders trained in this way can capture generic contextual information of the input text and improve a variety of downstream tasks significantly.
However, training contextual representations is known to be a resource-hungry process. For example, ELMo is reported to take about 2 weeks to train on a one-billion-token corpus with a vocabulary of 800,000 words using three GPUs.1 This slow training procedure hinders the development cycle, prevents fine-grained parameter tuning, and makes training contextual representations inaccessible to the broader community. Recent work also raises concerns about the environmental implications of training such large models (Strubell et al., 2019). In addition, the success of these models stems from the large amount of data they use. It is challenging, if not impossible, to train a contextual representation model on a larger corpus with tens or hundreds of billions of tokens.
In this work, we explore how to accelerate contextual representation learning. We identify the softmax layer as the primary cause of inefficiency. This component takes up a considerable portion of the total trainable parameters (80% for ELMo) and consumes a huge amount of training time. However, it is often not needed in the final model, as the goal of contextual representation learning is to build a generic encoder. Therefore, it is rather a waste to allocate extensive computational resources to the softmax layer.
Inspired by Kumar and Tsvetkov (2019), we consider learning contextual representation models with continuous outputs. In the training process, the contextual encoder is learned by minimizing the distance between its output and a pre-trained target word embedding. The constant time complexity and small memory footprint of the output layer perfectly serve our desire to decouple learning contexts and words and devote most computational resources to the contextual encoder. In addition, we combine the approach with open-vocabulary word embeddings such that the model can be trained without the need to pre-define a
1 https://github.com/allenai/bilm-tf/issues/55.
Transactions of the Association for Computational Linguistics, vol. 7, pp. 611–624, 2019. https://doi.org/10.1162/tacl_a_00289
Action Editor: Luke Zettlemoyer. Submission batch: 1/2019; Revision batch: 6/2019; Published 9/2019.
© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
closed word set as the vocabulary. We also provide
an alternative interpretation of learning contextual
encoders with continuous outputs that sheds light
on how the pre-trained embedding could affect the
performance of the model.
We conduct a comprehensive empirical study to analyze the proposed approach and several existing methods that were originally proposed to reduce the complexity of the output layer in language models, such as the adaptive softmax and the sub-word methods. We incorporate these approaches into ELMo and conduct a comprehensive study to compare them in terms of training speed and performance on five downstream tasks. We demonstrate that the proposed approach effectively reduces the training time and trainable parameters while maintaining competitive performance compared with the baselines. Our approach also exhibits a consistent computational advantage under different conditions (e.g., with different vocabulary sizes, with different sentence encoders, and with different numbers of GPUs).
Source code is available at https://github.
com/uclanlp/ELMO-C.
2 Background and Related Work
Contextual representation We review contextual representation models from two aspects: how they are trained and how they are used in downstream tasks.
CoVe (McCann et al., 2017) uses the source language encoder from a machine translation model as a contextual representation model. Peters et al. (2018a) advocate for the use of larger unlabelled corpora and propose ELMo, a forward and a backward LSTM-based (Hochreiter and Schmidhuber, 1997) language model, whereas GPT (Radford et al., 2018) and GPT-2 (Radford et al., 2019) build a language model with the Transformer (Vaswani et al., 2017). BERT (Devlin et al., 2019) introduces the masked language model and provides deep bidirectional representations.
There are two existing strategies for applying
pre-trained contextual representations to down-
stream tasks: 1) feature-based and 2) fine-tuning.
In the feature-based approach, fixed features
are extracted from the contextual encoder (例如,
ELMo, CoVe) and inserted as an input into a
task-specific model. In the fine-tuning approach,
the contextual encoder is designed as a part of
the network architecture for downstream tasks,
and its parameters are fine-tuned with the down-
stream task. BERT is designed for the fine-tuning
approach but it is also evaluated with the feature-
based approach. GPT-2 is a scaled-up version
of GPT and exhibits strong performance under
zero-shot settings.
Speeding up language model training Considerable efforts have been devoted to accelerating the training process of language models. One line of research focuses on developing faster sequence encoder architectures such as CNNs (Kim et al., 2016; Dauphin et al., 2017), QRNN (Bradbury et al., 2016), SRU (Lei et al., 2018), and the Transformer (Vaswani et al., 2017). These architectures have been extensively used for learning language representations (Radford et al., 2018; Devlin et al., 2019; Tang et al., 2018). Another line of work focuses on the large-vocabulary issue, as a large and ever-growing vocabulary results in an intractable softmax layer. Our work falls into the second line, and we review existing solutions in detail.
Several studies for language modeling focus on directly reducing the complexity of the softmax layer. Following Kumar and Tsvetkov (2019), we group them into two categories: sampling-based approximations and structural approximations. Sampling-based approximations include the sampled softmax (Bengio et al., 2003) and NCE (Mnih and Teh, 2012). The sampled softmax approximates the normalization term of the softmax by sampling a subset of negative targets, and NCE replaces the softmax with a binary classifier. On the other hand, structural approximations, such as the hierarchical softmax (Morin and Bengio, 2005) and the adaptive softmax (Grave et al., 2016), form a structural hierarchy to avoid expensive normalization. The adaptive softmax, in particular, groups words in the vocabulary into either a short-list or clusters of rare words. For frequent words, a softmax over the short-list suffices, which reduces computation and memory usage significantly. The adaptive softmax has been shown to achieve results close to those of the full softmax while maintaining high GPU efficiency (Merity et al., 2018).
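As a rough illustration of the comparison above, the following back-of-the-envelope cost model contrasts the per-token FLOPs of a full softmax with a two-level adaptive-softmax-style scheme. The sizes, the reduced rare-word dimension, and the 90/10 frequency split are illustrative assumptions, not the exact configuration of Grave et al. (2016):

```python
# Rough FLOP count for one softmax evaluation over a vocabulary of size V
# with hidden dimension m: the logits alone cost about 2 * V * m operations.
def full_softmax_flops(V, m):
    return 2 * V * m

# Two-level adaptive-softmax sketch: frequent words live in a short-list of
# size s (plus one "gate" logit for the rare cluster); the rare cluster of
# size r is only evaluated for the fraction p_rare of tokens that fall into
# it, and rare words use a reduced dimension m_rare.
def adaptive_softmax_flops(m, shortlist, rare, p_rare, m_rare):
    head = 2 * (shortlist + 1) * m      # short-list + gate, paid on every token
    tail = p_rare * 2 * rare * m_rare   # rare cluster, paid only on rare tokens
    return head + tail

V, m = 800_000, 512
full = full_softmax_flops(V, m)
# Illustrative split: a 20k short-list covering ~90% of tokens, rare dim 128.
adaptive = adaptive_softmax_flops(m, shortlist=20_000, rare=V - 20_000,
                                  p_rare=0.1, m_rare=128)
print(f"full: {full:.3g} FLOPs, adaptive: {adaptive:.3g} FLOPs")
```

Under these made-up sizes the adaptive scheme is roughly an order of magnitude cheaper per token, which matches the sub-linear growth described above.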
Regarding contextual representation models, ELMo used the sampled softmax, whereas GPT and BERT resorted to a subword method. Specifically, they used WordPiece (Wu et al., 2016) or BPE (Sennrich et al., 2016) to split the words into
subwords, and the language models were trained to take subwords as input and also predict subwords. This method is efficient and scalable, as the subword vocabulary can be kept small. One potential drawback of these subword-level language models, however, is that they produce representations for fragments of words. Therefore, it takes extra effort to generate word-level representations (see the discussion in Section 4.2).
The high cost of the softmax layer has also been noted in the sentence representation learning literature. Following the success of Word2Vec (Mikolov et al., 2013), methods such as SkipThought (Kiros et al., 2015) have been developed to learn distributed sentence representations by predicting the context sentences of a given sentence, which involves sequentially decoding words of the target sentence. Jernite et al. (2017) and Logeswaran and Lee (2018) notice the inefficiency of the softmax layer during decoding and propose to use discriminative instead of generative objectives, eliminating the need for decoding. However, these approaches are not directly applicable to contextual representation learning.
3 Approach
A contextual representation model, at its core, is a language model pre-trained on a large unlabeled corpus. In the following, we review the objective of language models and the architectures of existing contextual representation models. We then introduce the proposed model.
Language model objective Given a set of text sequences as the training corpus, we can construct a collection of word-context pairs (w, c), and the goal of a language model is to predict the word w based on the context c. In a forward language model, the context c is defined as the previous words in the sequence, whereas for a backward language model, the context of a word is defined as the following words. For a masked language model, some words in the input sentence are masked (e.g., replaced by a [MASK] token) and the objective is to predict the masked words from the remainder. Different contextual representation models optimize different objectives. For example, ELMo trains a forward and a backward language model, and BERT trains a masked language model.
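The three objectives differ only in how the context is formed for each target word. A minimal sketch of the pair construction (the [MASK] handling here is a simplification of BERT's actual masking recipe):

```python
def forward_lm_pairs(tokens):
    # Context = all previous words; predict the next word.
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def backward_lm_pairs(tokens):
    # Context = all following words; predict the preceding word.
    return [(tokens[i + 1:], tokens[i]) for i in range(len(tokens) - 1)]

def masked_lm_pairs(tokens, masked_positions):
    # Context = the sentence with some words replaced by [MASK];
    # predict each masked word from the remainder.
    masked = ["[MASK]" if i in masked_positions else t
              for i, t in enumerate(tokens)]
    return [(masked, tokens[i]) for i in masked_positions]

sent = ["the", "cat", "sat", "down"]
print(forward_lm_pairs(sent)[0])   # (['the'], 'cat')
print(masked_lm_pairs(sent, {2}))
```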
Model architecture A typical neural language model consists of three parts: 1) an input layer, 2) a sequence encoder, and 3) a softmax layer. Given a word-context pair (w, c), the input layer uses a word embedding or a character-CNN model (Kim et al., 2016) to convert the input words in c into word vectors. Then the sequence encoder embeds the context into a context vector c ∈ R^m using a multi-layer LSTM (Hochreiter and Schmidhuber, 1997), a Gated CNN (Dauphin et al., 2017), or a Transformer (Vaswani et al., 2017). The softmax layer then multiplies the context vector c with an output word embedding2 W ∈ R^{V×m} and uses a softmax function to produce a conditional distribution p(w|c) over the vocabulary of size V.
In a language model, the learning objective l(w, c) for a pair (w, c) is then expressed as:

l(w, c) = − log p(w|c)
        = − log softmax(cW^T)_w
        = − c · w + log Σ_{w′} exp(c · w′),    (1)
where w ∈ R^m is the row of W corresponding to the target word w and the second term sums over the vocabulary. After the model is trained, the contextual representations are generated from the latent states of the sequence encoder. For example, ELMo combines the hidden states of the LSTMs to generate a contextualized word embedding for each word in a sentence. We refer the reader to Peters et al. (2018a) for details.
Note that the size of W and the computational complexity of the second term in Eq. (1) scale linearly with the vocabulary size, V. Therefore, when V is large, the softmax layer becomes the speed bottleneck.
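A naive implementation of Eq. (1) makes the bottleneck concrete: the normalization term touches every row of W, so each example costs O(V · m). The vectors below are made up for illustration; real implementations batch this as a single matrix multiply, but the linear dependence on V remains:

```python
import math

def softmax_lm_loss(c, W, target):
    # Eq. (1): -c . w_target + log sum_{w'} exp(c . w').
    # The logits require a dot product with every row of W: O(V * m).
    logits = [sum(ci * wi for ci, wi in zip(c, w_row)) for w_row in W]
    return -logits[target] + math.log(sum(math.exp(z) for z in logits))

c = [0.5, -1.0, 2.0]          # context vector, m = 3
W = [[1.0, 0.0, 0.0],         # toy output embedding, V = 3
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
loss = softmax_lm_loss(c, W, target=2)
print(round(loss, 4))
```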
Our approach The scaling issue of the softmax also occurs in other language generation and sequence-to-sequence models. In the literature, several approaches have been proposed to approximate the softmax layer or bypass it with a subword method (see Section 2). Recently, Kumar and Tsvetkov (2019) propose to treat the context vector as continuous outputs and directly minimize the distance
2 The dimension of the original output from the sequence encoder may not match the dimension of the output word embedding. In that case, a projection layer is added after the original sequence encoder to ensure that the two dimensions match.
between the context vector and the pre-trained word embedding associated with the target word,

l(w, c) = d(c, w),    (2)

where the distance function d could be the L2 distance ‖c − w‖₂, the cosine distance c · w / (‖c‖‖w‖), or a probabilistic distance metric.
We argue that the idea of learning with continuous outputs particularly suits contextual representation learning. As the goal is to obtain a strong contextual encoder, it makes sense to use a pre-trained output word embedding and decouple learning the contextual encoder from the output embedding. In the remainder of this section, we discuss the computational efficiency of the proposed approach and its combination with open-vocabulary word embeddings. We also provide an alternative way to interpret training contextual encoders with continuous outputs.
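A minimal sketch of the loss in Eq. (2) with the cosine distance (written here in the common 1 − similarity form): the encoder output is compared against a frozen pre-trained vector for the target word, so the cost is O(m) and independent of the vocabulary size. The embedding values are made up for illustration:

```python
import math

def cosine_distance(c, w):
    # Eq. (2) with d = cosine distance, in the 1 - similarity convention.
    dot = sum(ci * wi for ci, wi in zip(c, w))
    norm = (math.sqrt(sum(ci * ci for ci in c))
            * math.sqrt(sum(wi * wi for wi in w)))
    return 1.0 - dot / norm

# Frozen pre-trained output embedding (toy values); it is never updated,
# so the output layer contributes zero trainable parameters.
pretrained = {"cat": [0.9, 0.1], "dog": [0.8, 0.2]}

context_vector = [0.85, 0.15]   # what the encoder produced for this context
loss = cosine_distance(context_vector, pretrained["cat"])
print(loss)
```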
3.1 Computational Efficiency
The continuous output layer has a reduced arith-
metic complexity and trainable parameter size.
We illustrate these improvements and how they
contribute to reducing the overall training time of a
contextual representation model in the following.
For comparison, we include the sampled softmax,
the adaptive softmax, and the subword method in
the discussion.
3.1.1 Learning with Continuous Outputs
Arithmetic complexity The arithmetic complexity (i.e., FLOPs) of evaluating the loss with continuous outputs (i.e., Eq. 2) is O(m), as we only need to calculate the distance between two m-dimensional vectors. The complexity of the sampled softmax is proportional to the number of negative samples per batch. When the vocabulary is huge, a large number of negative samples are needed (Jozefowicz et al., 2016). For the adaptive softmax, the time complexity is determined by the capacities of the short-list and the rare-word clusters, which grow sub-linearly with the vocabulary size. The complexity of the subword method is determined by the subword vocabulary size. In contrast, the time spent on the continuous output layer and loss evaluation remains constant with respect to the vocabulary size and is negligible.
Trainable parameter size The output word embedding usually takes up a huge part of the parameters of a language model. For example, the softmax layer in ELMo trained on the One Billion Word Benchmark (Chelba et al., 2013) takes up more than 80% of the trainable parameters of the entire model. Even if an approximation such as the sampled softmax is used, the number of trainable parameters is not reduced. Approaches like the adaptive softmax reduce the dimension of the softmax embedding for rare words; the trainable parameter size is effectively reduced but still remains sizable. For a model trained on the same corpus (Grave et al., 2016), the adaptive softmax still amounts to 240 million parameters, whereas the sequence encoder has only around 50 million parameters. In contrast, we learn a contextual encoder with Eq. (2) using a pre-trained word embedding, reducing the trainable parameters besides the encoder from tens or hundreds of millions to zero.
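The parameter arithmetic behind these numbers is simple. With sizes roughly in the range of the ELMo setting cited above (an 800,000-word vocabulary and a 512-dimensional output embedding, both illustrative here), the output embedding alone accounts for hundreds of millions of trainable parameters:

```python
def softmax_output_params(vocab_size, dim):
    # A full (or sampled) softmax keeps a trainable V x m output embedding.
    return vocab_size * dim

V, m = 800_000, 512
softmax_params = softmax_output_params(V, m)
continuous_params = 0   # the pre-trained output embedding is frozen
print(f"softmax output layer: {softmax_params / 1e6:.0f}M trainable params, "
      f"continuous output layer: {continuous_params}")
```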
3.1.2 Overall Training Time
We now discuss how the efficiency improvements
to the output layer contribute to the reduction
of the overall training time, in the context of
synchronous stochastic gradient descent training
on multiple GPUs. In general, the following three factors determine the training time.
Arithmetic complexity The arithmetic com-
plexity of a model includes the complexity of the
forward and backward propagation on the in-
put layer, the sequence encoder, and the output
层. It also includes the overhead of the opti-
mization algorithm such as gradient clipping and
model updates. The complexity of this optimiza-
tion overhead is often proportional to the number
of parameters that need updating. With the con-
tinuous output layer, not only the arithmetic com-
plexity but also the optimization overhead are
reduced.
GPU memory consumption The training time is also affected by GPU memory consumption, as less GPU memory consumption allows a larger batch size. For the same amount of data and hardware resources, a larger batch size means better parallelism and less training time. Our approach exhibits a small GPU memory footprint, due to reductions in the arithmetic complexity (with fewer intermediate results to keep) and trainable parameter size (with fewer parameters to store). As a result, training with continuous outputs is 2 to 4 times more memory-efficient than with the softmax layer (see Section 5.2).
Note that as the output word embedding is
fixed, we can keep that embedding in the main
memory and only load the required part to the
GPU memory. Despite the fact that this comes
with an overhead of moving part of the output
word embedding from CPU to GPU memory at
each iteration, the benefit of parallelism often
dominates over the communication overhead on
mainstream hardware, where the GPU memory
is often comparatively limited. We also note that a larger batch size may lead to difficulty in optimization. Several methods have been developed to ease the large-batch training issue (Goyal et al., 2017; You et al., 2018). We show that these methods are sufficient for resolving the optimization difficulty in our experiments (Section 4).
Communication cost To train large neural network models, using multiple GPUs is almost a necessity. In addition, one way to scale up current systems is to increase the number of GPUs used. In this case, the communication cost across GPUs needs to be taken into consideration. This cost comes from synchronizing the parameters and their gradients across GPUs, which is proportional to the size of the parameters that need to be updated. For the sampled softmax, due to the use of sparse gradients, the communication cost is proportional to the number of sampled words. For the adaptive softmax and the subword language model, the communication cost is proportional to the trainable parameter size. The continuous output layer, on the other hand, incurs little communication cost across GPUs.
3.2 Open-Vocabulary Training
We utilize an open-vocabulary word embedding as both the input and the output layer embedding. Open-vocabulary word embeddings, such as the FastText embedding and the MIMICK model (Pinter et al., 2017), utilize character or subword information to provide word embeddings. They can represent an unlimited number of words with a fixed number of parameters. As a result, we can train contextual encoders with an open vocabulary, meaning that we do not need to pre-define a closed word set as the vocabulary and the model can be trained on any sequence of words.
Open-vocabulary input layer To be easily ap-
plied in various tasks, the contextual encoder usu-
ally has an open-vocabulary input layer. ELMo
uses a character-CNN but it is relatively slow.
Thus we use a pre-trained open-vocabulary word
embedding as the input layer of the contextual
encoder, reducing the time complexity of the input
layer to a negligible level. This also aligns with
the main spirit of our approach, which is to spend
computational resources on the most important
部分, the sequence encoder.
Open-vocabulary output layer For the softmax layer, including efficient variants such as the adaptive softmax, the output vocabulary has to be pre-defined so that the normalization term can be calculated. As the softmax layer's arithmetic complexity and parameter size grow with the vocabulary size, the vocabulary is often truncated to avoid expensive computation.
With the continuous output layer, we can conduct training on an arbitrary sequence of words, as long as the output embedding for those words can be derived. This can be achieved by using an open-vocabulary embedding. This feature is especially attractive if we are training on domains or languages with a long-tail word distribution, such as the biomedical domain, where truncating the vocabulary may not be acceptable.
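To sketch how an open-vocabulary embedding can supply an output vector for any word, the toy code below follows the FastText idea of hashing a word's character n-grams into a fixed set of bucket vectors and averaging them. The hashing scheme, bucket count, and dimensionality are simplified placeholders, not FastText's actual implementation:

```python
import hashlib

DIM, BUCKETS = 4, 1000

def bucket_vector(key):
    # Deterministic pseudo-random vector per bucket, standing in for a
    # trained bucket-embedding table of fixed size.
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return [((h >> (8 * i)) % 100) / 100.0 for i in range(DIM)]

def char_ngrams(word, n_min=3, n_max=5):
    padded = f"<{word}>"
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def word_vector(word):
    # Average the bucket vectors of the word's n-grams: any word, even one
    # never seen in training, gets an embedding from a fixed parameter set.
    grams = char_ngrams(word)
    vecs = [bucket_vector(int(hashlib.md5(g.encode()).hexdigest(), 16) % BUCKETS)
            for g in grams]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

print(word_vector("angiogenesis"))   # an out-of-vocabulary word still gets a vector
```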
3.3 Interpretation of Learning Contextual Encoders with Continuous Outputs
In the following, we justify the intuition behind learning with continuous outputs and discuss how the pre-trained word embedding affects the performance of the model.
Language models essentially model the word-context conditional probability matrix, that is, A ∈ R^{N×V} where A_{c,w} = p(w|c), N is the number of all possible contexts, and V is the vocabulary size (Levy and Goldberg, 2014; Yang et al., 2017). The continuous output layer can be viewed as modeling A after using the word embedding as a projection matrix.
To illustrate this, consider the global objective of the layer with the cosine distance:3

L = Σ_{(w,c)} #(w, c) l(w, c)
  = − Σ_{(w,c)} #(w, c) c · w
  = − Σ_c #(c) c · Σ_w p(w|c) w
  = − Σ_c #(c) c · Σ_w A_{c,w} w,

3 For simplicity, we take the cosine distance as a running example, but the conclusions hold for other distance functions.
Model           Input          Sequence Encoder   Output
ELMO            CNN            LSTM               Sampled Softmax
ELMO-C (OURS)   FASTTEXTCC     LSTM w/ LN         CONT w/ FASTTEXTCC
ELMO-A          FASTTEXTCC     LSTM w/ LN         Adaptive Softmax
ELMO-Sub        Subword        LSTM w/ LN         Softmax
ELMO-CONEB      FASTTEXTONEB   LSTM w/ LN         CONT w/ FASTTEXTONEB
ELMO-CRND       FASTTEXTCC     LSTM w/ LN         CONT w/ Random Embedding
ELMO-CCNN       Trained CNN    LSTM w/ LN         CONT w/ Trained CNN
ELMO-CCNN-CC    Trained CNN    LSTM w/ LN         CONT w/ FASTTEXTCC
ELMO-CCC-CNN    FASTTEXTCC     LSTM w/ LN         CONT w/ Trained CNN

Table 1: Specifications of the variants of ELMo models compared in Sections 4 and 5. CONT means the model has continuous outputs. LN means layer normalization.
where #(w, c) is the number of occurrences of the pair (w, c) in the corpus. We assume all vectors (c and w) are normalized.
To optimize the inner product between c and Σ_w p(w|c) w, we essentially align the direction of the context vector c with the expected word vector under context c, Σ_w p(w|c) w = E_{w∼p(w|c)} w. In other words, given a word embedding matrix W ∈ R^{V×d}, our approach aligns c with the corresponding row (AW)_{c,:} of AW. Therefore, the objective can be viewed as conducting multivariate regression to approximate (AW)_{c,:} given the context.
Based on this view, the performance of the contextual representation model depends on how much information of the original matrix A is preserved after projection with W. In the spirit of PCA (Jolliffe, 2011), to keep the variance of A, we would like (AW)_{c,:} and (AW)_{c′,:} to be distant from each other if c and c′ are very different. Therefore, a pre-trained word embedding, which projects words with different meanings into different positions in space, is a natural choice for the projection matrix W and can help preserve much of the variance of A.
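This projection view can be checked numerically on a toy example: with the objective above, the regression target for a context c is the expected word vector Σ_w p(w|c) w, which is exactly row c of AW (all numbers below are made up):

```python
# A: word-context conditional probability matrix, N = 2 contexts, V = 3 words.
A = [[0.7, 0.2, 0.1],
     [0.1, 0.3, 0.6]]
# W: pre-trained word embedding, V = 3 words, d = 2 dimensions.
W = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AW = matmul(A, W)  # row c of AW is the regression target for context c

# The same target, written as the expectation E_{w ~ p(.|c)}[w] for c = 0:
expected = [sum(A[0][w] * W[w][j] for w in range(3)) for j in range(2)]
assert expected == AW[0]
print(AW)
```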
4 Experiments
We accelerate ELMo with the proposed approach and show that the resulting model, ELMO-C, is computationally efficient and maintains competitive performance, compared to the original ELMo model (ELMO), an ELMo variant with the adaptive softmax (ELMO-A),4 and another variant with the subword method (ELMO-Sub).
4 We include ELMO-A instead of a model with the sampled softmax because the adaptive softmax has been shown to have superior performance (Grave et al., 2016).
4.1 Setup
Models In the following, we introduce the models in detail. Table 1 provides a summary. The original ELMo consists of a character-CNN as the input layer, a forward and a backward LSTM with projection as the sequence encoder, and a sampled softmax as the output layer. Adagrad (Duchi et al., 2011) is used as the optimizer. We conduct experiments using the reimplementation of ELMO in AllenNLP (Gardner et al., 2018) and build the others upon it.
The key difference between ELMO-C and ELMO is that ELMO-C produces continuous outputs, and we train it with the cosine distance loss. A FastText embedding trained on Common Crawl (Mikolov et al., 2017) (FASTTEXTCC) is used as the output embedding. Based on preliminary experiments, we also make three minor changes: 1) we use FASTTEXTCC as the input layer as it is more efficient than the character-CNN model; 2) we add a layer norm (Ba et al., 2016) after the projection layer of the LSTM to improve the convergence speed; 3) we use Adam with the learning rate schedule from Chen et al. (2018) to help training with a large batch size.
Our main goal is to study how different output layers affect the training speed and performance, which cannot be achieved by just comparing ELMO-C and ELMO, due to the aforementioned minor changes to ELMO-C. Therefore, we introduce two additional baseline models (ELMO-A and ELMO-Sub), which differ from ELMO-C in a minimal way. Specifically, their sequence encoders and training recipes are kept the same as ELMO-C's. Thus ELMO-C, ELMO-A, and ELMO-Sub can be directly compared.
         ELMOORG  BASE   FASTTEXTCC    ELMO          ELMO-A        ELMO-Sub      ELMO-C
Time     -        -      -             14 x 3        5.7 x 4       3.9 x 4       2.5 x 4
Batch    -        -      -             128           256           320           768
Params   -        -      -             499M          196M          92M           76M
SNLI     88.7     88.0   87.7          88.5          88.9          87.1          88.8
Coref    NA       NA     68.90         72.9          72.9          72.4          72.9
SST-5    54.7     51.4   51.30 ± 0.77  52.96 ± 2.26  53.58 ± 0.77  53.02 ± 2.08  53.80 ± 0.73
NER      92.22    90.15  90.97 ± 0.43  92.51 ± 0.28  92.28 ± 0.20  92.17 ± 0.56  92.24 ± 0.10
SRL      84.6     81.4   80.2          83.4          82.7          82.4          82.4

Table 2: Computational efficiency of the main competing models and their performance on five NLP benchmarks. Time is the overall training time in Days x Cards format. Batch is the maximal batch size per card. Params is the number of trainable parameters in millions. Due to the small test sizes for NER and SST-5, we report mean and standard deviation across three runs. Our approach (ELMO-C) exhibits better computational efficiency and shows comparable performance compared with ELMO, ELMO-A, and ELMO-Sub.
ELMO-A uses the adaptive softmax as its output
层. We carefully choose the hyper-parameters
of the adaptive softmax to obtain an efficient yet
strong baseline. It has only half of the parameters
of the one reported in Grave et al. (2016) 但
achieves a perplexity of 35.8, lower than ELMO’s
39.7.
ELMO-Sub takes subwords as input and also predicts subwords. Thus, unlike the other models, its vocabulary consists of around 30,000 subwords created using BPE (Sennrich et al., 2016). For this reason, a lookup-table-style embedding rather than FASTTEXTCC is used as its input layer, and a vanilla softmax is used as its output layer. Its input and output word embeddings are tied and trained from scratch.
For reference, we also list the results of ELMo and the baseline reported in Peters et al. (2018a) as ELMOORG and BASE. However, these models are evaluated using different configurations. Finally, we also include FASTTEXTCC, a (non-contextual) word embedding model, as another baseline.
All contextual representation models are trained
on the One Billion Word Benchmark for 10
epochs and the experiments are conducted on a
workstation with 8 GeForce GTX 1080Ti GPUs,
40 Intel Xeon E5 CPUs, and 128G main memory.
Downstream benchmarks We follow Peters et al. (2018a) and use the feature-based approach to evaluate contextual representations on downstream benchmarks. ELMo was evaluated on six benchmarks and we conduct evaluations on five of them. SQuAD (Rajpurkar et al., 2016) is not available for implementation reasons.5 In the following, we briefly review the benchmarks and the task-specific models. For details, please refer to Peters et al. (2018a).
• SNLI (Bowman et al., 2015): The textual entailment task seeks to determine whether a ‘‘hypothesis’’ can be entailed from a ‘‘premise’’. The task-specific model is ESIM (Chen et al., 2017).
• Coref: Coreference resolution is the task of clustering mentions in text that refer to the same underlying entities. The data set is from the CoNLL 2012 shared task (Pradhan et al., 2012) and the model is from Lee et al. (2018). Note that we use an improved version of the Coref system (Lee et al., 2017) used in Peters et al. (2018a).
• SST-5 (Socher et al., 2013): The task in-
volves selecting one of five labels to describe
a sentence from a movie review. We use the
BCN model from McCann et al. (2017).
• NER: The CoNLL 2003 NER task (Sang and De Meulder, 2003) consists of newswire from the Reuters RCV1 corpus tagged with four different entity types. The model is a biLSTM-CRF from Peters et al. (2018a), similar to Collobert et al. (2011).
• SRL: Semantic role labeling models the predicate-argument structure of a sentence. It
5 The SQuAD experiment in Peters et al. (2018a) was conducted with an implementation in TensorFlow. This experiment setting is not currently available in AllenNLP (https://github.com/allenai/allennlp/pull/1626), nor can it be easily replicated in PyTorch.
seeks to answer ‘‘Who did what to whom’’. The model is from He et al. (2017) and the data set is from Pradhan et al. (2013).
For SNLI, SST-5, NER, and SRL, we use the same downstream models as in Peters et al. (2018a), re-implemented in AllenNLP.6 For Coref, Peters et al. (2018a) use the model from Lee et al. (2017) and we use an improved model (Lee et al., 2018) from the same authors. For all the tasks, the hyper-parameters are tuned to maximize the performance for the original ELMo, and all models are tested under the same configurations.
4.2 Main Results
We report the main results in Table 2. Our approach (ELMO-C) enjoys a substantial computational advantage while maintaining competitive or even superior performance, compared to ELMO, ELMO-A, and ELMO-Sub.
Model efficiency For model efficiency, the statistics of ELMO are reported by the original authors, who used three GTX 1080 Tis. We train ELMO-A, ELMO-Sub, and ELMO-C using four GTX 1080 Tis. Roughly speaking, compared with ELMO, ELMO-C is 4.2x faster and 6x more memory-efficient. To give a clear view of the speedup the CONT layer brings, we compare ELMO-C with ELMO-A. ELMO-A differs from ELMO-C only in the output layer. Still, ELMO-C has a 2.28x speed advantage and is 3x more memory-efficient. Compared with ELMO-Sub, our approach holds a 1.56x speed advantage and is 2x more memory-efficient. The results here only show the overall efficiency of our approach under the setting of ELMo; a detailed analysis of the efficiency is provided in Section 5.2.
Performance on downstream tasks ELMO-C
works especially well on semantic-centric tasks,
such as SNLI, Coref, and SST-5. However, for
tasks that require a certain level of syntactic
information, such as NER and SRL (He et al.,
2018), ELMO-C suffers from slight performance
degradation compared with ELMO, but it remains
competitive with ELMO-A and ELMO-Sub. We
suspect that the performance degradation is related
to the pre-trained embedding and conduct further
analysis in Section 5.1.
6For SRL, the score reported by AllenNLP is lower than
the score reported by the CoNLL official script.
In addition, we notice that the performance of
ELMO-Sub is unstable. It shows satisfying per-
formance on SST-5, NER, and SRL. However,
it lags behind on Coref and even fails to outper-
form FASTTEXTcc on SNLI. ELMO-Sub provides
subword-level contextual representations, which
we suspect could be the cause of the unstable
performance. Specifically, to get the representation
for a word in evaluation on word-level tasks, we
follow Devlin et al. (2019) and use the represen-
tation of its first subword. This could be sub-
optimal if a precise word-level representation is
desired (e.g., when the suffix of a word is an
important feature). These results are consistent
with the observation of Kitaev and Klein (2018),
who find that a special design is needed to apply
BERT to constituency parsing because of the
subword segmentation. However, we note that the
scope of our experiment is limited. It is likely that
when ELMO-Sub is scaled up or used with the
fine-tuning method, the aforementioned issue is
alleviated; we leave that to future work.
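The first-subword heuristic above can be sketched in a few lines; the function name and the toy segmentation are ours, and only the pooling rule (keep the vector of each word's first subword) follows Devlin et al. (2019):

```python
# Recover word-level vectors from subword-level contextual representations
# by keeping only the representation of each word's first subword.

def first_subword_pool(subword_vectors, word_spans):
    """subword_vectors: one vector per subword token.
    word_spans: one (start, end) subword index span per word."""
    return [subword_vectors[start] for start, end in word_spans]

# Toy example: three words, where the second word splits into subwords 1..3
# (e.g., "unhappiness" -> "un", "##happi", "##ness").
vecs = [[0.0], [1.0], [2.0], [3.0], [4.0]]
spans = [(0, 1), (1, 4), (4, 5)]
print(first_subword_pool(vecs, spans))  # [[0.0], [1.0], [4.0]]
```

Any information carried mainly by later subwords (such as a suffix) is discarded by this pooling, which is one plausible source of the instability discussed above.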
5 Analysis
We conduct analysis regarding the effect of the
pre-trained word embedding on the performance
of the contextual encoder. We also investigate the
contributions of different factors to the overall
training time and study the speedup of our ap-
proach under various conditions.
5.1 Effect of the Pre-trained Embedding
We show the effect of the pre-trained embedding
by introducing several variants of ELMO-C (sum-
marized in Table 1) and list their performance in
Table 3.
Quality of the pre-trained embedding We
aim to understand how the quality of the pre-
trained output word embedding W affects the
performance of the contextual encoder. To study
这, we train a FastText word embedding on the
One Billion Word Benchmark, a much smaller
corpus than Common Crawl. We then train an
ELMO-C variant, ELMO-CONEB, by using this em-
bedding in the input and output layers. Comparing
it to ELMO-C, ELMO-CONEB holds up surprisingly
well: it is competitive on SNLI, Coref, and SST-5
while being inferior on NER and SRL.
This motivates us to take a step further and
use a completely random output word embedding.
Task    ELMO-C        ELMO-CONEB    ELMO-CRND     ELMO-CCNN     ELMO-CCNN-CC  ELMO-CCC-CNN
SNLI    88.8          88.4          88.4          88.0          88.2          88.4
Coref   72.9          72.4          73.0          72.8          72.9          72.6
SST-5   53.80 ± 0.73  52.70 ± 0.90  53.01 ± 1.67  53.38 ± 0.68  54.33 ± 1.26  54.16 ± 0.96
NER     92.24 ± 0.10  92.03 ± 0.47  91.99 ± 0.35  92.24 ± 0.36  92.04 ± 0.33  91.93 ± 0.53
SRL     82.4          82.2          82.9          82.8          83.4          83.3

Table 3: Performance of ablation models on five NLP benchmarks. ELMO-C is included for reference.
We replace the output embedding of ELMO-C
with a random embedding matrix, of which each
element is randomly drawn from a standard normal
分配. We denote this model as ELMO-CRND.
We find that this model performs well (Table 3),
with only a mild performance drop compared to
ELMO-C. The performance of ELMO-CRND shows
the robustness of the proposed approach and
demonstrates that the deep LSTM is expressive
enough to fit a complex output space. However,
we find that the pre-trained input word embedding
is still indispensable, because using a randomly
initialized input embedding would lead to brittle
performance (e.g., 85.8 on SNLI).
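The ELMO-CRND variant amounts to nothing more than replacing the output embedding matrix with fixed i.i.d. standard-normal targets; a minimal numpy sketch (the vocabulary size and dimension below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 50_000, 300  # illustrative sizes only

# ELMO-C regresses onto rows of a pre-trained (e.g., FastText) matrix;
# ELMO-CRND instead uses random targets drawn from a standard normal.
# The matrix stays fixed during training, so it adds no trainable parameters.
W_random = rng.standard_normal((vocab_size, dim))

print(W_random.shape)  # (50000, 300)
```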
Pre-trained CNN layer as word embedding
In Section 4, we observed that models using Fast-
Text embedding (ELMO-C and ELMO-A) as input
performed worse than ELMo on SRL, a task
relying heavily on syntactic information. We
suspect that the FastText embedding is weaker
at capturing syntactic information than the
character-CNN trained in ELMo (Peters et al.,
2018b). To verify this, we train ELMO-C using
the trained CNN layer from ELMo as the input
layer (ELMO-CCNN-CC) or the output embedding
(ELMO-CCC-CNN). We observe that the two models
exhibit notably better performance on SRL (see
Table 3). We also consider an ELMO-CCNN model,
which uses the CNN layer as both the input and
output embedding. On SRL, ELMO-CCNN per-
forms favorably compared to ELMO-C but slightly
worse than ELMO-CCNN-CC or ELMO-CCC-CNN.
We suspect that this is because ELMO-CCNN-CC
and ELMO-CCC-CNN benefit from different kinds
of embeddings in the input layer and the output
layer.
5.2 Computational Efficiency
Next, we study the computational efficiency of the
continuous output layer against several baselines
from two aspects. First, in Section 3.1, we dis-
cussed three factors governing the overall training
time of the model: 1) arithmetic complexity, 2)
GPU memory consumption, and 3) communica-
tion cost. We aim to study how each factor affects
the overall training time of each model. Second,
in the above experiments, we focused on ELMo
with the LSTM as the sequence encoder. We
wonder whether the continuous output layer can
deliver attractive speedup for sequence encoders
of different types and sizes.
We investigate the continuous output layer
(CONT) and three common baseline output layers:
1) the subword-level language model (SUBWORD),
2) the adaptive softmax layer (ADAPTIVE), and 3)
the sampled softmax layer (SAMPLED). Addition-
ally, we include a variant of the sampled softmax,
denoted as FIXED, where the output word embed-
ding is initialized with the FastText embedding and
fixed during training. This output layer is
similar to a special case of CONT with a ranking
loss, where the model encourages its output to be
close to the target word embedding but far from a
negative sample.
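To make the contrast concrete, the two objectives can be sketched as follows; the cosine-distance form of the CONT loss and the single-negative margin loss standing in for FIXED are illustrative simplifications of the losses discussed in the paper, not exact reproductions:

```python
import numpy as np

def cont_loss(pred, target):
    """CONT-style objective: push the encoder output toward the target
    word's pre-trained embedding (here via cosine distance)."""
    cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target))
    return 1.0 - cos

def ranking_loss(pred, target, negative, margin=0.5):
    """FIXED-style objective: be close to the target embedding while
    staying at least `margin` closer to it than to a negative sample."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, margin - cos(pred, target) + cos(pred, negative))

pred = np.array([1.0, 0.0])
target = np.array([1.0, 0.0])
negative = np.array([0.0, 1.0])
print(cont_loss(pred, target))               # 0.0
print(ranking_loss(pred, target, negative))  # 0.0
```

Neither loss touches a |V|-way softmax, which is where the computational savings of these output layers come from.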
In total, we study five different output layers.
For several of them, the trade-off between
computational efficiency and model performance
is controlled by their hyper-parameters. We
choose hyper-parameters close to those reported
in the literature to strike a balance between speed
and performance.
5.2.1 Speedup Breakdown
We pair the five different output layers with the
same input layer (fixed word embedding) and
sequence encoder (ELMo’s LSTM with projec-
tion). We then test the training speed of these
models under three scenarios, which are designed
to reflect the individual effects of the arithmetic
complexity, the GPU memory consumption, and
the communication cost:
• S1 (small batch): We use one GPU card and
set the batch size to 1. The asynchronous
execution feature of the GPU is disabled. The
time needed to finish one batch is reported.
Model     Vocab   Params  Batch   S1 (small batch)  S2 (large batch)  S3 (multiple GPUs)
CONT      ∞       76M     640     0.47s             115.28s           34.58s
FIXED     ∞       76M     512     1.17X             1.24X             1.24X
SUBWORD   ∞       92M     320     1.09X             1.53X             1.55X
ADAPTIVE  40K     97M     384     1.08X             1.30X             1.34X
ADAPTIVE  800K    196M    256     1.16X             1.47X             1.89X
ADAPTIVE  2000K   213M    192     1.25X             1.82X             2.49X
SAMPLED   40K     96M     512     1.07X             1.18X             1.30X
SAMPLED   800K    483M    256     1.15X             1.35X             1.91X
SAMPLED   2000K   1102M   64      1.16X             2.35X             16.09X
Table 4: Statistics on the computational efficiency of different models. For CONT, we report the actual
training time in seconds. For other models, we report the relative training time compared to CONT.
Params: Number of trainable parameters of the whole model in millions. Batch: Maximal batch size per
card.
• S2 (large batch): We use one GPU card and
the maximal batch size. The time needed to
finish training on one million words for each
model is reported.
• S3 (multiple GPUs): We use 4 GPU cards
and the maximal batch size. The time needed
to finish training on one million words for
each model is reported.
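For S2 and S3, the reported quantity is simply the measured wall-clock time normalized to one million training words; a small helper makes the conversion explicit (the function and the illustrative numbers are ours, not from the paper):

```python
def seconds_per_million_words(batch_time_s, batch_size, seq_len):
    """Convert the wall-clock time of one batch into the time needed
    to train on one million words at that batch size."""
    words_per_batch = batch_size * seq_len
    return batch_time_s * (1_000_000 / words_per_batch)

# Illustrative numbers only: a 0.5 s batch of 640 sequences of length 20.
print(seconds_per_million_words(0.5, 640, 20))  # 39.0625
```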
In Table 4, we report the training speed of
the models under each scenario.7 In addition, we
report the parameter size and the maximal batch
size on one GPU card. For ADAPTIVE and SAMPLED,
the vocabulary size also affects the training speed,
so we test them under three different vocabulary
sizes:8 40K, 800K, and 2,000K.
Arithmetic complexity The arithmetic com-
plexity of the models is reflected by the speed
under S1, where the GPU memory is always
abundant and the arithmetic complexity is the
dominating factor. CONT holds a mild advan-
tage (1.07x-1.25x) over baseline models, which
is expected because the LSTM layers in ELMo
are quite slow and that undermines the advantage
of the continuous output layer. For ELMO-Sub,
the small yet non-negligible softmax layer adds
overhead to the arithmetic complexity. FIXED,
ADAPTIVE, and SAMPLED have similar arithmetic
complexity, but ADAPTIVE has the highest com-
plexity when the vocabulary size is large (e.g.,
2,000K).
7CONT under S3 is slightly slower than the ELMO-C model
reported in Section 4.2. This is because when training the
ELMO-C model reported in Section 4.2, we actually train a
forward ELMO-C on two cards and a backward ELMO-C on
the two other cards, which reduces the communication cost by
half. This optimization is only applicable to our approach in
the setting of ELMo and does not work for the other baseline
methods. In this experiment, we disable this optimization for
fairness.
8The 2,000K vocabulary is created on the tokenized 250-
billion-word Common Crawl corpus (Panchenko et al., 2017),
and covers words that appear more than 397 times.
GPU memory consumption The effect of
GPU memory consumption can be observed by
comparing the statistics under S1 and S2: under
S2, the parallel computing capacity of the GPU
is fully utilized. For CONT, its great GPU memory
efficiency helps it gain a larger speedup under
S2, especially against
common baselines such as SUBWORD, ADAPTIVE,
and SAMPLED. For ELMO-Sub, in addition to the
overhead from the softmax layer, breaking words
into subwords leads to longer sequences, 哪个
increases the training time by 1.1x. Thus it is
1.53x slower than CONT under S2. SAMPLED suffers
from its huge parameter size and exhibits poor
scalability with respect to the vocabulary size
(2.35x slower when the vocabulary size reaches
2,000K).
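The parameter blow-up of SAMPLED can be checked back-of-the-envelope. Assuming a 512-dimensional output embedding (an assumption on our part, matching the projection size of ELMo's LSTM), the trainable output matrix alone accounts for essentially the entire gap between CONT's 76M parameters and SAMPLED's 1102M at a 2,000K vocabulary:

```python
vocab_size = 2_000_000        # the 2,000K vocabulary from Table 4
embed_dim = 512               # assumed output embedding dimension
shared_params = 76_000_000    # parameters shared with CONT (Table 4)

# Sampled softmax still stores the full trainable output embedding.
output_embedding = vocab_size * embed_dim
total = shared_params + output_embedding
print(total / 1e6)  # 1100.0, close to the 1102M reported in Table 4
```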
Communication cost The effect of the com-
munication cost across GPUs can be observed
by comparing the statistics under S2 and S3.
As the communication cost and the GPU memory
consumption are both highly dependent on the
parameter size, the observations are similar.
Model     LSTM    LSTMX2  TRANS BASE  ELMO    TRANS LARGE  GPT
CONT      3.97s   10.42s  15.87s      34.58s  48.55s       43.53s
FIXED     1.93X   1.32X   1.52X       1.24X   1.37X        1.14X
SUBWORD   2.32X   1.49X   1.78X       1.55X   1.72X        1.44X
ADAPTIVE  4.58X   2.20X   2.62X       1.89X   3.28X        2.33X
SAMPLED   2.50X   1.60X   2.91X       1.91X   OOM          8.31X

Table 5: Time needed to finish training on one million words for each model using
4 GPU cards and the maximal batch size. For CONT, we report the actual training
time in seconds. For other models, we report the relative training time compared to
CONT. OOM means that the GPU memory is not sufficient. CONT shows substantial
speedup over common baselines under all scenarios.
5.2.2 The Continuous Output Layer with
Different Sequence Encoders
For this experiment, we pair the output layers
with different sequence encoders and investigate
their training time. We start from a single-layer
LSTM with a hidden size of 2048 (LSTM) and
a two-layer version (LSTMX2), both reported in
Grave et al. (2016). They are both smaller than the
sequence encoder used in ELMO. We then scale
up to the forward and backward Transformers
reported in Peters et al. (2018b) (TRANS BASE)
and the multi-layer LSTM with projection in
ELMO (ELMO). Finally, we test two larger Trans-
formers: TRANS LARGE, a scaled-up version of TRANS
BASE, and a uni-directional Transformer (denoted
as GPT) with the same size as BERTBASE (Devlin
et al., 2019) and GPT (Radford et al., 2018),
respectively. For all models but GPT, the lengths
of the input sequences are fixed at 20. For GPT,
we use input sequences of length 512, following
its original setting. For ADAPTIVE and SAMPLED, we
fix the vocabulary size at 800K.
We report the training time of each model
using four GPU cards and the maximal batch
size (S3) in Table 5. We find that the continuous
output layer remains attractive, even when the
sequence encoder is as large as GPT. In that case,
the speedup of CONT over SUBWORD, ADAPTIVE,
and SAMPLED is still substantial (1.44X – 8.31X). In
addition, we observe that for sequence encoders
of the same type, the more complex they get, the
less speedup CONT enjoys, which is expected. For
instance, from LSTM to LSTMX2, the speedup of
CONT decreases noticeably. However, the speedup
the continuous output brings also depends on
the architecture of the sequence encoder. For
instance, though TRANS BASE and TRANS LARGE are
more complex than LSTMX2, CONT enjoys larger
speedup with those Transformers. Profiling the
training process of sequence encoders such as
the LSTM and the Transformer on GPU devices
is an interesting research topic but out of the
scope of this study.
6 Conclusion
We introduced an efficient framework to learn
contextual representation without the softmax
layer. The experiments with ELMo showed that
we significantly accelerate the training of the
current models while maintaining competitive
performance on various downstream tasks.
Acknowledgments
We wish to thank the anonymous reviewers,
the editor, Mark Yatskar, Muhao Chen, Xianda
Zhou, and members of the UCLANLP lab for
helpful comments. We also thank Yulia Tsvetkov
and Sachin Kumar for help with implementing the
continuous output layer, as well as Jieyu Zhao,
Kenton Lee, and Nelson Liu for providing re-
producible source code for experiments. This work
was supported by National Science Foundation
grants IIS-1760523 and IIS-1901527.
References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E.
Hinton. 2016. Layer normalization. arXiv preprint
arXiv:1607.06450.
Yoshua Bengio and Jean-S´ebastien Sen´ecal. 2003.
Quick training of probabilistic neural nets by
importance sampling. In AISTATS.
Samuel R. Bowman, Gabor Angeli, Christopher
Potts, and Christopher D. Manning. 2015. A
large annotated corpus for learning natural
language inference. In EMNLP.
James Bradbury, Stephen Merity, Caiming Xiong,
and Richard Socher. 2016. Quasi-recurrent neu-
ral networks. arXiv 预印本 arXiv:1611.01576.
Ciprian Chelba, Tomas Mikolov, Mike Schuster,
Qi Ge, Thorsten Brants, Phillipp Koehn, 和
Tony Robinson. 2013. One billion word bench-
mark for measuring progress in statistical lan-
guage modeling. arXiv 预印本 arXiv:1312.3005.
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin
约翰逊, Wolfgang Macherey, George Foster,
Llion Jones, Mike Schuster, Noam Shazeer,
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit,
Lukasz Kaiser, Zhifeng Chen, Yonghui Wu,
and Macduff Hughes. 2018. The best of both
worlds: Combining recent advances in neural
machine translation. In ACL.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei,
Hui Jiang, and Diana Inkpen. 2017. Enhanced
LSTM for natural language inference. In ACL.
Ronan Collobert, Jason Weston, L´eon Bottou,
Michael Karlen, Koray Kavukcuoglu, and Pavel
P. Kuksa. 2011. Natural language processing
(almost) from scratch. Journal of Machine Learn-
ing Research, 12:2493–2537.
Yann N. Dauphin, Angela Fan, Michael Auli, 和
David Grangier. 2017. Language modeling with
gated convolutional networks. In ICML.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, 和
Kristina Toutanova. 2019. BERT: Pre-training
of deep bidirectional transformers for language
understanding. In NAACL-HLT.
John Duchi, Elad Hazan, and Yoram Singer.
2011. Adaptive subgradient methods for online
learning and stochastic optimization. Journal of
Machine Learning Research, 12:2121–2159.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind
Tafjord, Pradeep Dasigi, Nelson F. Liu,
Matthew E. Peters, Michael Schmitz, and Luke
Zettlemoyer. 2018. AllenNLP: A deep semantic
natural language processing platform. arXiv
preprint arXiv:1803.07640.
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter
Noordhuis, Lukasz Wesolowski, Aapo Kyrola,
Andrew Tulloch, Yangqing Jia, and Kaiming
He. 2017. Accurate, large minibatch SGD:
Training ImageNet in 1 hour. arXiv preprint
arXiv:1706.02677.
Edouard Grave, Armand Joulin, Moustapha Cissé,
David Grangier, and Hervé Jégou. 2016. Effi-
cient softmax approximation for GPUs. arXiv
preprint arXiv:1609.04309.
Luheng He, Kenton Lee, Mike Lewis, and Luke
Zettlemoyer. 2017. Deep semantic role label-
ing: What works and what’s next. In ACL.
Shexia He, Zuchao Li, Hai Zhao, and Hongxiao
Bai. 2018. Syntax for semantic role labeling, to
be, or not to be. In ACL.
Sepp Hochreiter and Jürgen Schmidhuber. 1997.
Long short-term memory. Neural Computation,
9:1735–1780.
Yacine Jernite, Samuel R Bowman, and David
Sontag. 2017. Discourse-based objectives for
fast unsupervised sentence representation learn-
ing. arXiv preprint arXiv:1705.00557.
Ian Jolliffe. 2011. Principal component analysis. In
International Encyclopedia of Statistical Science,
Springer Berlin Heidelberg.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster,
Noam Shazeer, and Yonghui Wu. 2016. Explor-
ing the limits of language modeling. arXiv pre-
print arXiv:1602.02410.
Yoon Kim, Yacine Jernite, David Sontag, 和
Alexander M. Rush. 2016. Character-aware
neural language models. In AAAI.
Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov,
Richard Zemel, Raquel Urtasun, Antonio Torralba,
and Sanja Fidler. 2015. Skip-thought vectors.
In NIPS.
Nikita Kitaev and Dan Klein. 2018. Multilingual
constituency parsing with self-attention and
pre-training. arXiv 预印本 arXiv:1812.11760.
Sachin Kumar and Yulia Tsvetkov. 2019. Von
Mises-Fisher loss for training sequence to
sequence models with continuous outputs. In
ICLR.
Kenton Lee, Luheng He, Mike Lewis, and Luke
Zettlemoyer. 2017. End-to-end neural coref-
erence resolution. In EMNLP.
Kenton Lee, Luheng He, and Luke Zettlemoyer.
2018. Higher-order coreference resolution with
coarse-to-fine inference. In NAACL-HLT.
Matthew E. Peters, Mark Neumann, Mohit Iyyer,
Matt Gardner, Christopher Clark, Kenton Lee,
and Luke Zettlemoyer. 2018a. Deep contex-
tualized word representations. In NAACL-HLT.
Matthew E. Peters, Mark Neumann, 卢克
Zettlemoyer, and Wen-tau Yih. 2018乙. Dissect-
ing contextual word embeddings: Architecture
and representation. In EMNLP.
Tao Lei, Yu Zhang, Sida I. 王, Hui Dai, 和
Yoav Artzi. 2018. Simple recurrent units for
highly parallelizable recurrence. In EMNLP.
Yuval Pinter, Robert Guthrie, and Jacob Eisenstein.
2017. Mimicking word embeddings using sub-
word RNNs. In EMNLP.
Omer Levy and Yoav Goldberg. 2014. Neural word
embedding as implicit matrix factorization. In
NIPS.
Lajanugen Logeswaran and Honglak Lee. 2018.
An efficient framework for learning sentence
陈述. ICLR.
Bryan McCann, James Bradbury, Caiming Xiong,
and Richard Socher. 2017. Learned in translation:
Contextualized word vectors. In NIPS.
Stephen Merity, Nitish Shirish Keskar, 和
Richard Socher. 2018. An analysis of neural
language modeling at multiple scales. arXiv
preprint arXiv:1803.08240.
Tomas Mikolov, Kai Chen, Greg Corrado, 和
Jeffrey Dean. 2013. Efficient estimation of word
representations in vector space. arXiv 预印本
arXiv:1301.3781.
Tomas Mikolov, Edouard Grave, Piotr Bojanowski,
Christian Puhrsch, and Armand Joulin. 2017.
Advances in pre-training distributed word rep-
resentations. arXiv 预印本 arXiv:1712.09405.
A. Mnih and Y. W. Teh. 2012. A fast and simple
algorithm for training neural probabilistic
language models. In ICML.
Frederic Morin and Yoshua Bengio. 2005. Hier-
archical probabilistic neural network language
model. In AISTATS.
Alexander Panchenko, Eugen Ruppert, Stefano
Faralli, Simone Paolo Ponzetto, and Chris
Biemann. 2017. Building a web-scale dependency-
parsed corpus from CommonCrawl. arXiv
preprint arXiv:1710.01779.
Sameer Pradhan, Alessandro Moschitti, Nianwen
Xue, Hwee Tou Ng, Anders Björkelund, Olga
Uryupina, Yuchen Zhang, and Zhi Zhong.
2013. Towards robust linguistic analysis using
OntoNotes. In CoNLL.
Sameer Pradhan, Alessandro Moschitti, Nianwen
薛, Olga Uryupina, and Yuchen Zhang. 2012.
CoNLL-2012 shared task: Modeling multi-
lingual unrestricted coreference in OntoNotes.
In Joint Conference on EMNLP and CoNLL-
Shared Task.
Alec Radford, Karthik Narasimhan, Tim Salimans,
and Ilya Sutskever. 2018. Improving language
understanding by generative pre-training. OpenAI
Blog.
Alec Radford, Jeffrey Wu, Rewon Child, 大卫
Luan, Dario Amodei, and Ilya Sutskever. 2019.
Language models are unsupervised multitask
learners. OpenAI Blog.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev,
and Percy Liang. 2016. SQuAD: 100,000+
questions for machine comprehension of text.
In EMNLP.
Erik F Sang and Fien De Meulder. 2003. Introduc-
tion to the CoNLL-2003 shared task: Language-
independent named entity recognition. arXiv
preprint cs/0306050.
Rico Sennrich, Barry Haddow, and Alexandra
Birch. 2016. Neural machine translation of rare
words with subword units. In ACL.
Richard Socher, Alex Perelygin, Jean Wu, Jason
Chuang, Christopher D. Manning, Andrew Y.
Ng, and Christopher Potts. 2013. Recursive
deep models for semantic compositionality over
a sentiment treebank. In EMNLP.
Emma Strubell, Ananya Ganesh, and Andrew
麦卡勒姆. 2019. Energy and policy consid-
erations for deep learning in NLP. arXiv
preprint arXiv:1906.02243.
Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang,
and Virginia de Sa. 2018. Speeding up context-
based sentence representation learning with
non-autoregressive convolutional decoding. 在
Workshop on Representation Learning for NLP.
Ashish Vaswani, Noam Shazeer, Niki Parmar,
Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. 2017.
Attention is all you need. In NIPS.
Yonghui Wu, Mike Schuster, Zhifeng Chen,
Quoc V. Le, Mohammad Norouzi, Wolfgang
Macherey, Maxim Krikun, Yuan Cao, Qin
Gao, Klaus Macherey, Jeff Klingner, Apurva
Shah, Melvin Johnson, Xiaobing Liu, Lukasz
Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku
Kudo, Hideto Kazawa, Keith Stevens, George
Kurian, Nishant Patil, Wei Wang, Cliff Young,
Jason Smith, Jason Riesa, Alex Rudnick, Oriol
Vinyals, Gregory S. Corrado, Macduff Hughes,
and Jeffrey Dean. 2016. Google’s neural
machine translation system: Bridging the gap
between human and machine translation. arXiv
preprint arXiv:1609.08144.
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov,
and William W. Cohen. 2017. Breaking the soft-
max bottleneck: A high-rank RNN language
model. arXiv preprint arXiv:1711.03953.
Yang You, Zhao Zhang, Cho-Jui Hsieh, James
Demmel, and Kurt Keutzer. 2018. ImageNet
training in minutes. In Proceedings of the 47th
International Conference on Parallel Process-
ing, ICPP 2018.