Better Document-Level Machine Translation with Bayes’ Rule

Lei Yu1, Laurent Sartran1, Wojciech Stokowiec1,
Wang Ling1, Lingpeng Kong1, Phil Blunsom1,2, Chris Dyer1

1DeepMind,

2University of Oxford

{leiyu, lsartran, wstokowiec, lingwang, lingpenk, pblunsom,
cdyer}@google.com

Abstract

We show that Bayes’ rule provides an effective mechanism for creating document translation models that can be learned from only parallel sentences and monolingual documents, a compelling benefit because parallel documents are not always available. In our formulation, the posterior probability of a candidate translation is the product of the unconditional (prior) probability of the candidate output document and the ‘‘reverse translation probability’’ of translating the candidate output back into the source language. Our proposed model uses a powerful autoregressive language model as the prior on target language documents, but it assumes that each sentence is translated independently from the target to the source language. Crucially, at test time, when a source document is observed, the document language model prior induces dependencies between the translations of the source sentences in the posterior. The model’s independence assumption not only enables efficient use of available data, but it additionally admits a practical left-to-right beam-search algorithm for carrying out inference. Experiments show that our model benefits from using cross-sentence context in the language model, and it outperforms existing document translation approaches.

1 Introduction

There have been many recent demonstrations that neural language models based on transformers (Vaswani et al., 2017; Dai et al., 2019) are capable of learning to generate remarkably coherent documents with few (Zellers et al., 2019) or no (Radford et al., 2019) conditioning variables. Despite this apparent generation ability, in practical applications, unconditional language models are most often used to provide representations for natural language understanding applications (Devlin et al., 2019; Yang et al., 2019; Peters et al., 2018), and how to use them for conditional generation applications remains an open question. Our hypothesis in this work is that Bayes’ rule provides an effective way to leverage powerful unconditional document language models to improve a conditional task: machine translation.

The application of Bayes’ rule to transform the translation modeling problem p(y | x), where y is the target language and x is the source language, has a long tradition and was the dominant paradigm in speech and language processing for many years (Brown et al., 1993), where it is often called a ‘‘noisy channel’’ decomposition, by analogy to an information theoretic conception of Bayes’ rule.

Whereas several recent papers have demonstrated that the noisy channel decomposition has benefits when translating sentences one-by-one (Yu et al., 2017; Yee et al., 2019; Ng et al., 2019), in this paper we show that this decomposition is particularly suited to tackling the problem of translating complete documents. Although using cross-sentence context and maintaining cross-document consistency has long been recognized as essential to the translation problem (Tiedemann and Scherrer, 2017; Bawden et al., 2018, inter alia), operationalizing this in models has been challenging for several reasons. Most prosaically, parallel documents are not generally available (whereas parallel sentences are much more numerous), making direct estimation of document translation probabilities challenging. More subtly, documents are considerably more diverse than sentences, and models must be carefully biased so as not to pick up spurious correlations.

Our Bayes’ rule decomposition (§2) permits several innovations that enable us to solve these problems. Rather than directly modeling the conditional distribution, we rewrite it as p(y | x) ∝ p(y) × p(x | y). This changes the learning problem from estimating a single complex
conditional distribution to learning two different distributions: a language model p(y), which provides unconditional estimates of the output (in this paper, documents); and p(x | y), which provides the probability of translating a candidate output y into the (observed) source document x. As we will discuss subsequently, although the problems of estimating p(y | x) and p(x | y) are formally similar, independence assumptions made in p(x | y) are less statistically costly than they might otherwise be since, at test time, we will be conditioning on x and reasoning about a posterior distribution over y, which will be jointly dependent on all (conditionally independent) parts of x. This statistical fact, which is the same trick that gives naïve Bayes classifiers their expressiveness and ease of estimation, permits us to assume independence between sentence translations in the reverse translation model, and therefore to use parallel sentences (rather than parallel documents) to train it. In the posterior, we thus have an implicit estimate of a document-level translation system, even though we made no use of parallel documents when estimating the prior or likelihood models. This is particularly useful because parallel sentences are much more readily available than parallel documents. A second benefit of our approach is that the unconditional language model can be estimated from nonparallel data, which exists in vast quantities.
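To make this coupling concrete, the following toy example is a minimal Python sketch with made-up probabilities (not the trained models used in this paper). The sentence-level channel model cannot distinguish the candidate translations, but the document-level prior prefers tense-consistent output, so the posterior does too.

```python
import itertools
import math

# Toy illustration (all probabilities are invented) of how a document-level language
# model prior couples sentence translations in the posterior, even though the channel
# model p(x_i | y_i) treats sentences independently.

# For each of two source sentences, two candidate translations that the sentence-level
# channel model cannot tell apart (equal reverse-translation probabilities).
candidates = [
    ["He goes to school.", "He went to school."],   # candidates for source sentence 1
    ["He is late.",        "He was late."],         # candidates for source sentence 2
]

def channel_logprob(_src_index, _cand):
    """Assumed reverse model log p(x_i | y_i): uninformative in this toy example."""
    return math.log(0.5)

def doc_lm_logprob(sents):
    """Assumed document LM log p(y): prefers documents whose sentences agree in tense."""
    consistent = ("went" in sents[0]) == ("was" in sents[1])
    return math.log(0.4 if consistent else 0.1)

# Posterior score (up to a constant): log p(y) + sum_i log p(x_i | y_i).
scores = {
    combo: doc_lm_logprob(combo) + sum(channel_logprob(i, c) for i, c in enumerate(combo))
    for combo in itertools.product(*candidates)
}
print(max(scores, key=scores.get))  # a tense-consistent pair wins despite the indifferent channel
```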

Although the noisy channel model is ideal for exploiting the data resources that naturally exist in the world (large corpora of parallel but independent sentences, and large corpora of monolingual documents), we are faced with a much harder decoding problem (§3). To address this problem, we propose a new beam-search algorithm, exploiting the fact that our document language model operates left-to-right, and our reverse translation model treats sentences independently. The search is guided by a proposal distribution that provides candidate continuations of a document prefix, and these are reranked according to the posterior distribution. In particular, we compare two proposal models: one based on estimates of independent sentence translations (Vaswani et al., 2017) and one that conditions on the source document context (Zhang et al., 2018). Although closely related, our algorithm is much simpler and faster than that proposed in Yu et al. (2017). Rather than using a specially designed channel model (Yu et al., 2016), which is limited in processing long sequences like documents, our conditional sentence independence assumptions allow us to use any sequence-to-sequence model as the channel model, making it a better option for document-level translation.
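The sketch below gives a rough Python rendering of this decoding strategy: a left-to-right reranking beam search over target-document prefixes. The callables `propose`, `channel_logprob`, and `lm_logprob` are hypothetical stand-ins for the proposal model, the reverse translation model, and the document language model; details of the actual algorithm in §3 (such as score weighting and length normalization) are omitted here.

```python
from typing import Callable, List, Sequence, Tuple

def noisy_channel_beam_search(
    src_sents: Sequence[str],
    propose: Callable[[str, Sequence[str]], List[str]],      # candidate translations of one source sentence
    channel_logprob: Callable[[str, str], float],            # log p(x_i | y_i), sentence-level reverse model
    lm_logprob: Callable[[Sequence[str], str], float],       # log p(y_i | y_<i), document LM continuation
    beam_size: int = 5,
) -> List[str]:
    """Left-to-right reranking over target-document prefixes (simplified sketch)."""
    beam: List[Tuple[float, List[str]]] = [(0.0, [])]        # (cumulative score, target prefix)
    for x_i in src_sents:
        expanded: List[Tuple[float, List[str]]] = []
        for score, prefix in beam:
            for y_i in propose(x_i, prefix):                 # proposal model supplies candidates
                new_score = (score
                             + channel_logprob(x_i, y_i)     # reverse translation term
                             + lm_logprob(prefix, y_i))      # document-prior term
                expanded.append((new_score, prefix + [y_i]))
        beam = sorted(expanded, key=lambda t: t[0], reverse=True)[:beam_size]
    return beam[0][1]                                        # highest-scoring full document
```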

To explore the performance of our proposed model, we focus on Chinese–English translation, following a series of papers on document translation (Zhang et al., 2018; Werlen et al., 2018; Tu et al., 2018; Xiong et al., 2019). Although in general it is unreasonable to expect that independent translations of sentences would lead to coherent translations of documents, the task of translating Chinese into English poses some particularly acute challenges. As Chinese makes fewer inflectional distinctions than English does, and the relevant clues for predicting, for example, what tense an English verb should be in, or whether an English noun should have singular or plural morphology, may be spread throughout a document, it is crucial that extra-sentential context is used.

Our experiments (§4) explore: (1) different approaches to reranking, (2) different independence assumptions when modeling documents (i.e., whether sentences are generated independently or not), (3) different amounts of language modeling data, and (4) different proposal models. Briefly summarized, we find that document-context language models significantly improve the translation quality obtained with our system, both in terms of BLEU scores and in terms of a human evaluation. Targeted error analysis demonstrates that the document prior is capable of enforcing consistency of tense, number, and lexical choice across documents.

2 Model Description

We define x = (x^1, x^2, . . . , x^I) as the source document with I sentences and, similarly, y = (y^1, y^2, . . . , y^J) as the target document with J sentences. During the (human) translation process, translators may split or recombine sentences, but we will assume that I = J.¹ Let x^i = (x^i_1, x^i_2, . . . , x^i_M) represent the ith sentence in the document, consisting of M words; likewise let y^i = (y^i_1, y^i_2, . . . , y^i_N) denote the ith sentence in the target document, containing N words.

¹Size mismatches are addressed by merging sentences using sentence alignment algorithms (Gale and Church, 1993).
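As a tiny illustration of this notation (with invented example sentences), a document can be represented as a list of sentences, each a list of words, with the source and target documents assumed to have the same number of sentences after alignment:

```python
# Illustrative only: documents as lists of sentences, sentences as lists of words.
x = [["ta", "qu", "le", "xuexiao"], ["ta", "chidao", "le"]]   # source document, I = 2 sentences
y = [["He", "went", "to", "school"], ["He", "was", "late"]]   # target document, J = 2 sentences
assert len(x) == len(y)  # the I = J assumption (enforced via sentence alignment/merging)
x_1 = x[0]               # x^1: the first source sentence, a sequence of M = 4 words
```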


The translation of a document x is determined by finding the document ŷ for which p(ŷ | x) is optimal:

ŷ = arg max_y p(y | x).    (1)

Instead of modeling the probability p(y | x) directly, we factorize it using Bayes’ rule:

ŷ = arg max_y p(x | y) × p(y) / p(x)
  = arg max_y p(x | y) × p(y),    (2)

where p(x | y) is the channel model and p(y) is the language model.

We further assume that sentences are indepen-
dently translated, and that the sentences are gener-
ated by a left-to-right factorization according to
the chain rule. daher, we have

ˆy ≈ arg max

j

|X|

Y
i=1

P(xi | yi) × p(yi | jPDF Herunterladen