Reducing Confusion in Active Learning for Part-Of-Speech Tagging


Aditi Chaudhary1, Antonios Anastasopoulos2,∗, Zaid Sheikh1, Graham Neubig1
1Language Technologies Institute, Carnegie Mellon University
2Department of Computer Science, George Mason University

{aschaudh,zsheikh,gneubig}@cs.cmu.edu

antonis@gmu.edu

Abstract

Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances that maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution. The code is publicly released here.1

1 Introduction

Part-of-speech (POS) tagging is a crucial step for language understanding: it is used both in automatic language understanding applications such as named entity recognition (NER; Ankita and Nazeer, 2018) and question answering (QA; Wang et al., 2018), and in manual lan-
∗ Work done at Carnegie Mellon University.
1 https://github.com/Aditi138/CRAL.


guage understanding by linguists who are attempt-
ing to answer linguistic questions or document
less-resourced languages (Anastasopoulos et al.,
2018). Much prior work (Huang et al., 2015;
Bohnet et al., 2018) on developing high-quality
POS taggers uses neural network methods, which
rely on the availability of large amounts of
labelled data. However, such resources are not
readily available for the majority of the world's
7000 languages (Hammarström et al., 2018). Fur-
thermore, manually annotating large amounts of
text with trained experts is an expensive and
time-consuming task, even more so when lin-
guists/annotators might not be native speakers of
the language.

Active Learning (Lewis, 1995; Settles, 2009,
AL) is a family of methods that aim to train effec-
tive models with less human effort and cost by
selecting a subset of data that maximizes the
end model performance. Although many methods
have been proposed for AL in sequence labeling
(Settles and Craven, 2008; Marcheggiani and
Artières, 2014; Fang and Cohn, 2017), through
an empirical study across six typologically di-
verse languages we show that within the same
task setup these methods perform inconsistently.
Moreover, even in an oracle scenario, where we
have access to the true labels during data selection,
existing methods are far from optimal.

We posit that the primary reason for this inconsistent performance is that while existing methods consider uncertainty in predictions, they do not consider the direction of the uncertainty with respect to the output labels. For example, in Figure 1 we consider the German token ''die,'' which may be either a pronoun (PRO) or a determiner (DET). According to the initial model (iteration 0), ''die'' was labeled as PRO the majority of the time, but a significant amount of probability mass was also assigned to other output tags (OTHER) for many examples. Based on this, existing AL algorithms that select uncertain tokens will likely select ''die'' because it is frequent and

Figure 1: Illustration of selecting representative token-tag combinations to reduce confusion between the output tags on the German token ''die'' in an idealized scenario where we know true model confusion.

its predictions are not certain, but they may select
an instance of ‘‘die’’ with either a gold label of
PRO or DET. Intuitively, because we would like
to correct errors where tokens with true labels of
DET are mislabeled by the model as PRO, asking
the human annotator to tag an instance with a true
label of PRO, even if it is uncertain, is not likely
to be of much benefit.

Inspired by this observation, we pose the
problem of AL for POS tagging as selecting tokens
that maximally reduce the confusion between the
output tags. In the example above, we
would attempt to pick a token-tag pair ‘‘die/DET’’
to reduce potential errors of the model over-
predicting PRO despite its belief that DET is also
a plausible option. We demonstrate the features
of this model in an oracle setting where we know
true model confusions (as in Figure 1), and also
describe how we can approximate this strategy
when we do not know the true confusions.

We evaluate our proposed AL method by running simulation experiments on six typologically diverse languages, namely German, Swedish, Galician, North Sami, Persian, and Ukrainian, improving upon models seeded with crosslingual transfer from related languages (Cotterell and Heigold, 2017). Additionally, we conduct human annotation experiments on Griko, an endangered language that truly lacks significant resources. Our contributions are as follows:

1. We empirically demonstrate the shortcom-
ings of existing AL methods under both
conventional and ‘‘oracle’’ settings. Based
on the subsequent analysis, we propose a
new AL method that achieves +2.92 aver-
age per-token accuracy improvement over
existing methods under conventional set-
tings, and a +2.08 average per-token accuracy
improvement under the oracle setting.

2. We conduct extensive analysis measuring
how the selected data using our proposed
AL method closely matches the oracle data
distribution.

3. We further demonstrate the importance of model calibration (the accuracy of the model's probability estimates themselves), and show that cross-view training (Clark et al., 2018) is an effective way to improve calibration.

4. We perform human annotation using the
proposed method on an endangered language,
Griko, and find our proposed method to
perform better than the existing methods. In
this process, we collect 300 new token-level
annotations which will help further Griko
NLP.

2 Background: Active Learning

Generally, AL methods are designed to select
data based on two criteria: ‘‘informativeness’’
and ‘‘representativeness’’ (Huang et al., 2010).
Informativeness represents the ability of the selected data to reduce the model uncertainty on its predictions, and representativeness measures how well the selected data represent the entire unlabeled data. AL is an iterative process and is typically implemented in a batched fashion for neural models (Sener and Savarese, 2018). In a
given iteration, a batch of data is selected using
some heuristic on which the end model is trained
until convergence. This trained model is then used
to select the next batch for annotation, and so
forth.

In this work we focus on token-level AL methods, which require annotation of individual tokens in context, rather than full sequence annotation, which is more time consuming.

Given an unlabeled pool of sequences $D = \{x_1, x_2, \dots, x_n\}$ and a model θ, $P_\theta(y_{i,t} = j \mid x_i)$ denotes the output probability of the output tag j ∈ J produced by the model θ for the token xi,t in the input sequence xi. J denotes the set of POS tags. Most popular methods (Settles, 2009; Fang and Cohn, 2017) define the ''informativeness'' using either uncertainty sampling or query-by-committee. We provide a brief review of these existing methods below.

• Uncertainty Sampling (UNS; Fang and Cohn, 2017) selects the most uncertain word types in the unlabeled corpus D for annotation. First, it calculates the token entropy H(xi,t; θ) for each unlabeled sequence xi ∈ D under model θ, defined as

$$p_{i,t,j} := P_\theta(y_{i,t} = j \mid x_i)$$
$$H(x_{i,t}; \theta) = -\sum_{j \in J} p_{i,t,j} \log p_{i,t,j}$$

Next, this entropy is aggregated over all token occurrences across D to get an uncertainty score SUNS(z) for each word type z:

$$S_{\mathrm{UNS}}(z) = \sum_{x_i \in D} \sum_{x_{i,t} = z} H(x_{i,t}; \theta)$$
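As a rough illustration, the UNS score can be computed as in the sketch below; the data layout (a list of token sequences plus per-token tag distributions) is an assumption made for exposition, not the interface of the released code.

```python
import numpy as np
from collections import defaultdict

def uns_scores(unlabeled, token_probs):
    """Aggregate token entropies per word type (a minimal sketch of UNS).

    unlabeled:   list of token sequences, e.g. [["die", "Katze", ...], ...].
    token_probs: token_probs[i][t] is the model's tag distribution
                 P_theta(y_{i,t} = . | x_i) as a 1-D numpy array (assumed layout).
    """
    scores = defaultdict(float)
    for i, sent in enumerate(unlabeled):
        for t, word in enumerate(sent):
            p = np.clip(token_probs[i][t], 1e-12, 1.0)
            entropy = -np.sum(p * np.log(p))      # H(x_{i,t}; theta)
            scores[word] += entropy               # aggregate over occurrences of the type
    return scores

def top_b_types(scores, b):
    # b-argmax: the b word types with the highest aggregated score
    return sorted(scores, key=scores.get, reverse=True)[:b]
```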

• Query-by-Committee (QBC; Settles and Craven, 2008) selects the tokens having the highest disagreement between a committee of models C = {θ1, θ2, θ3, · · · }, which is aggregated over all token occurrences. The token-level disagreement scores are defined as

$$S_{\mathrm{DIS}}(x_{i,t}) = |C| - \max_{y \in [\hat{y}^{\theta_1}_{i,t}, \hat{y}^{\theta_2}_{i,t}, \dots, \hat{y}^{\theta_C}_{i,t}]} V(y),$$

where V(y) is the number of ''votes'' received for the token label y, and $\hat{y}^{\theta_c}_{i,t}$ is the prediction with the highest score according to model θc for the token xi,t. These disagreement scores are then aggregated over word types:

$$S_{\mathrm{QBC}}(z) = \sum_{x_i \in D} \sum_{x_{i,t} = z} S_{\mathrm{DIS}}(x_{i,t})$$
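A corresponding sketch of the vote-based disagreement score is given below, again under an assumed data layout; `committee_predictions` is a hypothetical structure holding each committee member's 1-best tags.

```python
from collections import Counter, defaultdict

def qbc_scores(unlabeled, committee_predictions):
    """Vote-disagreement scores per word type (a minimal sketch of QBC).

    committee_predictions[c][i][t] holds the tag predicted by committee
    member c for token t of sentence i (an assumed layout).
    """
    n_members = len(committee_predictions)
    scores = defaultdict(float)
    for i, sent in enumerate(unlabeled):
        for t, word in enumerate(sent):
            votes = Counter(preds[i][t] for preds in committee_predictions)
            disagreement = n_members - max(votes.values())   # S_DIS(x_{i,t})
            scores[word] += disagreement                     # aggregate per word type
    return scores
```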

Finally, regardless of whether we use an UNS-based or QBC-based score, the top b word types with the highest aggregated score are then selected as the to-label set

$$X_{\mathrm{LABEL}} = b\text{-}\arg\max_z\, S(z),$$

where b-argmax selects the top b word types having the highest S(z). Fang and Cohn (2017) and Chaudhary et al. (2019) further attempt to include the ''representativeness'' criterion by combining uncertainty sampling with a bias towards high-frequency tokens/spans.

Failings of Current AL Methods Although these methods are widely used, in a preliminary empirical study we found that these existing methods are less than optimal, and fail to bring consistent gains across multiple settings. Ideally, having a single strategy that performs the best across a diverse language set is useful for other researchers who plan to use AL for new languages. Instead of researchers experimenting with different strategies with human annotation, which is costly, having a single strategy known a priori will reduce both time and human annotation effort. Specifically, we demonstrate this problem of inconsistency through a set of oracle experiments, where the data selection algorithm has access to the true labels. We hope that these experiments serve as an upper bound for their non-oracle counterparts: if existing methods do not achieve gains even in this case, they will certainly be even less promising when true labels are not available at data selection time, as is the case in standard AL.

Concretely, as an oracle uncertainty sampling method UNS-ORACLE, we select types with the highest negative log likelihood of their true label. As an ''oracle'' QBC method QBC-ORACLE, we select types having the largest number of incorrect predictions. We conduct 20 AL iterations for each of these methods across six typologically diverse languages.2

First, we observe that between the oracle methods (Figure 2) no method consistently performs the best across all six languages. Second, we find that just considering uncertainty leads to an unbalanced selection of the resulting tags. To drive this point across, Table 1 shows the output tags selected for the German token ''zu'' across multiple iterations. UNS-ORACLE selects the most frequent output tag, failing to select tokens from other output tags. While QBC-ORACLE selects tokens having multiple tags, the distribution is not in proportion with the true tag distribution. Our hypothesis is that this inconsistent performance occurs because none of the methods consider the confusion between output tags while selecting data. This is especially important for POS tagging because we find that the existing methods tend to select highly syncretic word types. Syncretism is a linguistic phenomenon where distinctions required by syntax are not realized by morphology, meaning a word type can have multiple POS tags based on context.3 This is expected because syncretic word types, owing to their inherent ambiguity, cause high uncertainty, which is the underlying criterion for most AL methods.

2 More details on the experimental setup in Section §5.
3 Details can be found in Section §5.2, Table 3.

Figure 2: Illustrating the inconsistent performance of the UNS-ORACLE and QBC-ORACLE methods. The y-axis is the difference in POS accuracy between these two methods, averaged across 20 iterations with a batch size of 50.

              QBC-ORACLE                 UNS-ORACLE
ITERATION-1   PART=1                     ADP=1
ITERATION-2   PART=1, ADP=1              ADP=2
ITERATION-3   ADV=1, PART=1, ADP=1       ADP=2
ITERATION-4   ADV=1, PART=1, ADP=2       ADP=3

Table 1: Each cell is the tag selected for the German token ''zu'' at each iteration. The gold output tag distribution for ''zu'' is ADP=194, PART=103, ADV=5, PROPN=5, ADJ=1.

3 Confusion-Reducing Active Learning

To address the limitations of the existing methods, we propose a confusion-reducing active learning (CRAL) strategy, which aims at reducing the confusion between the output tags. In order to combine both ''informativeness'' and ''representativeness'', we follow a two-step algorithm:

1. Find the most confusing word types. The goal of this step is to find b word types that would maximally reduce the model confusion within the output tags. For each token xi,t in the unlabeled sequence xi ∈ D, we first define the confusion as the sum of the probabilities $P_\theta(y_{i,t} = j \mid x_i)$ of all output tags j other than the highest-probability output tag ŷi,t:

$$S_{\mathrm{CONF}}(x_{i,t}) = 1 - P_\theta(y_{i,t} = \hat{y}_{i,t} \mid x_i),$$

then sum this over all instances of type z:

$$S_{\mathrm{CRAL}}(z) = \sum_{x_i \in D} \sum_{x_{i,t} = z} S_{\mathrm{CONF}}(x_{i,t}).$$

Again selecting the top b types having the highest score (given by b-argmax) gives us the most confusing word types XINIT. For each token, we also store the output tag that had the second highest probability, which we refer to as the ''most confusing output tag'' for a particular xi,t:

$$o(x_{i,t}, j) = \begin{cases} 1 & \text{if } j = \arg\max_{j' \in J \setminus \{\hat{y}_{i,t}\}} p_{i,t,j'} \\ 0 & \text{otherwise.} \end{cases}$$

For each word type z, we aggregate the frequency of the most confusing output tag across all token occurrences:

$$O_{\mathrm{CRAL}}(z, j) = \sum_{x_i \in D} \sum_{x_{i,t} = z} o(x_{i,t}, j),$$

and compute the output tag with the highest frequency as the most confusing output tag for type z:

$$l(z) = \arg\max_{j \in J} O_{\mathrm{CRAL}}(z, j).$$

For each of the top b most confusing word types, we retrieve its most confusing output tag, resulting in type-tag pairs LINIT = {⟨z1, j1⟩, · · · , ⟨zb, jb⟩}. This process is illustrated in steps 7–14 in Algorithm 1.

2. Find the most representative token instances. Now that we have the most confusing type-tag pairs LINIT, our final step is selecting the most representative token instances for annotation. For each type-tag tuple ⟨zk, jk⟩ ∈ LINIT, we first retrieve contextualized representations for all token occurrences (xi,t = zk) of the word type zk from the encoder of the POS model. We express this in shorthand as ci,t := enc(xi,t). Because the true labels are unknown, there is no certain way of knowing which tokens have the ''most confusing output tag'' as the true label.

Algorithm 1 CONFUSION-REDUCING AL
 1: D ← unlabeled set of sequences
 2: Z ← vocabulary
 3: J ← output tag-set
 4: b ← active learning batch size
 5: Pθ(yi,t = j | xi) ← marginal probability
 6: pi,t,j := Pθ(yi,t = j | xi)
 7: for xi ∈ D do
 8:   for (xi,t) ∈ xi do
 9:     z ← xi,t
10:     SCRAL(z) ← SCRAL(z) + (1 − pi,t,ŷi,t)
11:     ĵ ← arg maxj∈J\{ŷi,t} pi,t,j
12:     OCRAL(z, ĵ) ← OCRAL(z, ĵ) + 1
13: XINIT ← b-arg maxz∈Z SCRAL(z)
14: for zk ∈ XINIT do
15:   jk ← arg maxj∈J OCRAL(zk, j)
16:   for xi,t ∈ D s.t. xi,t = zk do
17:     cxi,t ← enc(xi,t)
18:     wxi,t ← pi,t,jk ∗ cxi,t
19:   XLABEL(zk) ← CENTROID{wxi,t : xi,t = zk}

Therefore, each token representation ci,t is weighted with the model confidence of the most confusing tag jk, given by

$$w_{x_{i,t}} = P_\theta(y_{i,t} = j_k \mid x_i) * c_{i,t}.$$

Finally, the token instance that is closest to the centroid of this weighted token set becomes the most representative instance for annotation. Going forward, we also refer to the most representative token instance as the centroid, for simplicity.4 This process is illustrated in steps 14–19 in Algorithm 1.

During the annotation process, the selected representative tokens of each selected confusing word type are presented in context, similar to Fang and Cohn (2017) and Chaudhary et al. (2019).
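To make the two-step procedure concrete, the sketch below implements the selection under simplifying assumptions: `token_probs` and `encode` are hypothetical interfaces to the tagger's marginals and encoder, and the distance convention for ''closest to the centroid'' follows one plausible reading of Algorithm 1.

```python
import numpy as np
from collections import defaultdict

def cral_select(unlabeled, token_probs, encode, b):
    """Two-step CRAL selection (a sketch under simplifying assumptions).

    unlabeled:   list of token sequences.
    token_probs: token_probs[i][t] is P_theta(y_{i,t} = . | x_i) as a
                 1-D array over the tag set (assumed interface).
    encode:      encode(i, t) returns the contextualized vector c_{i,t}
                 from the tagger's encoder (assumed interface).
    b:           number of type-tag pairs to select in this batch.
    """
    s_cral = defaultdict(float)                     # confusion score per word type
    o_cral = defaultdict(lambda: defaultdict(int))  # counts of the most confusing tag
    occurrences = defaultdict(list)                 # (sentence, position) per type

    # Step 1: score word types and record their most confusing output tag.
    for i, sent in enumerate(unlabeled):
        for t, z in enumerate(sent):
            p = np.asarray(token_probs[i][t])
            best = int(np.argmax(p))
            s_cral[z] += 1.0 - p[best]              # S_CONF(x_{i,t}) = 1 - p(y_hat)
            second = int(np.argsort(p)[-2])         # second most probable tag
            o_cral[z][second] += 1
            occurrences[z].append((i, t))

    confusing_types = sorted(s_cral, key=s_cral.get, reverse=True)[:b]

    # Step 2: for each <type, most-confusing-tag> pair, pick the occurrence
    # whose confidence-weighted representation is closest to the centroid.
    to_label = []
    for z in confusing_types:
        j_k = max(o_cral[z], key=o_cral[z].get)     # l(z)
        entries = []
        for (i, t) in occurrences[z]:
            c = np.asarray(encode(i, t))
            w = token_probs[i][t][j_k] * c          # weight by P(y = j_k | x)
            entries.append((i, t, w))
        centroid = np.mean([w for (_, _, w) in entries], axis=0)
        i_star, t_star, _ = min(entries,
                                key=lambda e: np.linalg.norm(e[2] - centroid))
        to_label.append((z, j_k, i_star, t_star))   # occurrence to annotate
    return to_label
```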

4 Sener and Savarese (2018) describe why choosing the centroid is a good approximation of representativeness. They pose AL as a core-set selection problem, where a core set is the subset of data on which the model, if trained, closely matches the performance of the model trained on the entire dataset. They show that finding the core set is equivalent to choosing b center points such that the largest distance between a data point and its nearest center is minimized. We take inspiration from this result in using the centroid as the most representative instance.


4 Model and Training Regimen

Now that we have a method to select data
for annotation, we present our POS tagger in
Section §4.1, followed by the training algorithm
in Section §4.2.

4.1 Model Architecture

Our POS tagging model is a hierarchical neural conditional random field (CRF) tagger (Ma and Hovy, 2016; Lample et al., 2016; Yang et al., 2017). Each token (x, t) from the input sequence x is first passed through a character-level Bi-LSTM, followed by a self-attention layer (Vaswani et al., 2017), followed by another Bi-LSTM to capture information about the subword structure of the words. Finally, these character-level representations are fed into a token-level Bi-LSTM in order to create contextual representations $c_t = \overrightarrow{h}_t : \overleftarrow{h}_t$, where $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ are the representations from the forward and backward LSTMs, and '':'' denotes the concatenation operation. The encoded representations are then used by the CRF decoder to produce the output sequence.
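The hierarchical encoder can be sketched roughly as follows in PyTorch; hidden sizes follow the hyperparameters reported in Section §5, but details such as padding, dropout placement, and how character states are pooled into a token vector (mean pooling here) are simplifying assumptions rather than the exact released architecture.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Char BiLSTM -> self-attention -> char BiLSTM -> token BiLSTM.

    A sketch of the hierarchical encoder described above; the mean-pooling
    of character states into a token vector is a simplification.
    """
    def __init__(self, n_chars, n_tags, char_dim=30, char_hid=25, tok_hid=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm1 = nn.LSTM(char_dim, char_hid, bidirectional=True,
                                  batch_first=True)
        self.attn = nn.MultiheadAttention(2 * char_hid, num_heads=1,
                                          batch_first=True)
        self.char_lstm2 = nn.LSTM(2 * char_hid, char_hid, bidirectional=True,
                                  batch_first=True)
        self.tok_lstm = nn.LSTM(2 * char_hid, tok_hid, bidirectional=True,
                                batch_first=True)
        self.scorer = nn.Linear(2 * tok_hid, n_tags)   # feeds the CRF decoder

    def forward(self, char_ids):
        # char_ids: [n_tokens, max_chars] character ids of one sentence
        e = self.char_emb(char_ids)
        h, _ = self.char_lstm1(e)
        a, _ = self.attn(h, h, h)                      # self-attention over chars
        h2, _ = self.char_lstm2(a)
        tokens = h2.mean(dim=1).unsqueeze(0)           # [1, n_tokens, 2*char_hid]
        c, _ = self.tok_lstm(tokens)                   # contextual c_t
        return c.squeeze(0), self.scorer(c).squeeze(0)
```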

Because we acquire token-level annotations, we cannot directly use the traditional CRF, which expects a fully labeled sequence. Instead, we use a constrained CRF (Bellare and McCallum, 2007), which computes the loss only for annotated tokens by marginalizing over the un-annotated tokens, as has been used by prior token-level AL models (Fang and Cohn, 2017; Chaudhary et al., 2019) as well. Given an input sequence x and a label sequence y, the traditional CRF computes the likelihood as follows:

$$P(y \mid x) = \frac{\prod_{t=1}^{N} \psi_t(y_{t-1}, y_t, x, t)}{Z(x)},$$
$$Z(x) = \sum_{y \in \mathcal{Y}(N)} \prod_{t=1}^{N} \psi_t(y_{t-1}, y_t, x, t),$$

where N is the length of the sequence and $\mathcal{Y}(N)$ denotes the set of all possible label sequences of length N. $\psi_t(y_{t-1}, y_t, x) = \exp(W^{\top}_{y_{t-1}, y_t} x_t + b_{y_{t-1}, y_t})$ is the energy function, where $W_{y_{t-1}, y_t}$ and $b_{y_{t-1}, y_t}$ are the weight vector and bias corresponding to the label pair $(y_{t-1}, y_t)$, respectively. In constrained CRF training, $\mathcal{Y}_L$ denotes the set of all possible sequences that are congruent with the observed annotations, and the likelihood is computed as $P(\mathcal{Y}_L \mid x) = \sum_{y \in \mathcal{Y}_L} P(y \mid x)$.
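The constrained likelihood P(YL | x) can be computed with two runs of the standard forward algorithm, one over the full lattice and one over the lattice restricted to label sequences consistent with the annotations. The sketch below assumes a simplified unary-plus-transition parameterization of the potentials rather than the exact energy function above.

```python
import numpy as np
from scipy.special import logsumexp

def crf_log_partition(unary, trans, allowed=None):
    """Log-partition of a linear-chain CRF, optionally restricted to Y_L.

    unary:   [N, K] per-token label scores.
    trans:   [K, K] transition scores for (y_{t-1}, y_t).
    allowed: [N, K] boolean mask; True marks labels consistent with the
             observed annotations (all True for unannotated tokens).
    """
    N, K = unary.shape
    mask = np.ones((N, K), dtype=bool) if allowed is None else allowed
    neg_inf = -1e30
    alpha = np.where(mask[0], unary[0], neg_inf)
    for t in range(1, N):
        scores = alpha[:, None] + trans + unary[t][None, :]
        alpha = np.where(mask[t], logsumexp(scores, axis=0), neg_inf)
    return logsumexp(alpha)

def constrained_log_likelihood(unary, trans, annotations):
    """log P(Y_L | x); annotations maps position -> observed tag id."""
    N, K = unary.shape
    allowed = np.ones((N, K), dtype=bool)
    for t, j in annotations.items():
        allowed[t] = False
        allowed[t, j] = True        # only the annotated label is permitted here
    return crf_log_partition(unary, trans, allowed) - crf_log_partition(unary, trans)
```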

4.2 Cross-view Training Regimen

In order to further improve this model, we apply
cross-view training (CVT), a semi-supervised
learning method (Clark et al., 2018). On unlabeled
examples, CVT trains auxiliary prediction mo-
dules, which look at restricted ‘‘views’’ of an
input sequence, to match the prediction from the
full view. By forcing the auxiliary modules to
match the full-view module, CVT improves the
model's representation learning. Not only does it help in improving the downstream performance under low-resource conditions, but it also improves the model calibration overall (§5.4). Having a well-calibrated model is quite useful for AL, as a well-calibrated model tends to assign lower probabilities to ''true'' incorrect predictions, which allows the AL measure to select these incorrect tokens for annotation.

CVT is composed of four auxiliary prediction modules, namely: the forward module θfwd, which makes predictions without looking at the right of the current token; the backward module θbwd, which makes predictions without looking at the left of the current token; the future module θfut, which does not look at either the right context or the current token; and the past module θpst, which does not look at either the left context or the current token. The token representations ct for each module are as follows:

$$c^{\mathrm{fwd}}_t = \overrightarrow{h}_t, \qquad c^{\mathrm{bwd}}_t = \overleftarrow{h}_t, \qquad c^{\mathrm{full}}_t = \overrightarrow{h}_t : \overleftarrow{h}_t,$$
$$c^{\mathrm{fut}}_t = \overrightarrow{h}_{t-1}, \qquad c^{\mathrm{pst}}_t = \overleftarrow{h}_{t+1}.$$

For an unlabeled sequence x, the full-view model θfull first produces soft targets pθ(y|x) after inference. CVT matches the soft predictions from the V auxiliary modules by minimizing their KL-divergence. Although the CRF produces a probability distribution over all possible output sequences, for computational feasibility we compute the token-level KL-divergence using pθ(yt|x), the marginal probability distribution of token (x, t) over all output tags, which is calculated easily with the forward-backward algorithm:

$$\mathcal{L}_{\mathrm{CVT}} = \frac{1}{|D|} \sum_{x_i \in D} \sum_{x_{i,t} \in x_i} \sum_{v=1}^{V} \mathrm{KL}\big(p^{full}_\theta \,\|\, p^{v}_\theta\big),$$
$$p^{full}_\theta := P^{full}_\theta(y_{i,t} = j \mid x_i) \quad \text{and} \quad p^{v}_\theta := P^{v}_\theta(y_{i,t} = j \mid x_i),$$

where |D| is the total number of unlabeled examples in D.
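A token-level version of this loss can be sketched as below; the marginals are assumed to be precomputed (e.g., by forward-backward), and the full view is detached so that it provides fixed soft targets, following the spirit of Clark et al. (2018).

```python
import torch
import torch.nn.functional as F

def cvt_loss(full_marginals, view_marginals):
    """Token-level KL between the full view and each auxiliary view (a sketch).

    full_marginals: [n_tokens, n_tags] marginal tag distributions p_full.
    view_marginals: list of [n_tokens, n_tags] tensors, one per auxiliary view.
    """
    full = full_marginals.detach()          # the full view provides fixed targets
    loss = 0.0
    for pv in view_marginals:
        # KL(p_full || p_v), summed over tags and averaged over tokens
        loss = loss + F.kl_div(torch.log(pv + 1e-12), full,
                               reduction="batchmean")
    return loss
```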

4.3 Cross-Lingual Transfer Learning

Using the architecture described above, for any
given target language we first train a POS model
on a group of related high-resource languages. Wir
then fine-tune this pre-trained model on the newly
acquired annotations on the target language, as
obtained from an AL method. The objective of
cross-lingual transfer learning is to warm-start the
POS model on the target language. Several meth-
ods have been proposed in the past including
annotation projection (Zitouni and Florian, 2008),
and model transfer using pre-trained models such
as m-BERT (Devlin et al., 2019). In this work our
primary focus is on designing an active learning
method, so we simply pre-train a POS model on a
group of related high-resource languages (Cotterell
and Heigold, 2017), which is a computationally
cheap solution, a crucial requirement for running
multiple AL iterations. Moreover, recent work
(Siddhant et al., 2020) has shown the advantage
of pre-training using a selected set of related lan-
guages over a model pre-trained over all available
languages.

Following this, for a given target language we first select a set of typologically related languages. An initial set of transfer languages is obtained using the automated tool provided by Lin et al. (2019), which leverages features such as phylogenetic similarity, typology, lexical overlap, and size of available data in order to predict a list of optimal transfer languages. This list can then be refined using the experimenter's intuition. Finally, a POS model is trained on the concatenated corpora of the related languages. Similar to Johnson et al. (2017), a language identification token is added at the beginning and end of each sequence.
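The language identification marker can be added with a one-line helper such as the following; the `__de__` token format is an assumption for illustration.

```python
def add_language_id_tokens(tokens, lang_code):
    """Wrap a sentence with language identification tokens, in the spirit
    of Johnson et al. (2017); the __xx__ marker format is an assumption."""
    lang_token = f"__{lang_code}__"
    return [lang_token] + list(tokens) + [lang_token]

# e.g. add_language_id_tokens(["die", "Katze", "schläft"], "de")
# -> ["__de__", "die", "Katze", "schläft", "__de__"]
```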

5 Simulation Experiments

In this section, we describe the simulation experiments used for evaluating our method. Under this setting, we use the provided training data as our unlabeled pool and simulate annotations by using the gold labels for each AL method.

Datasets: For the simulation experiments, we test on six typologically diverse languages: German, Swedish, North Sami, Persian, Ukrainian, and Galician. We use data from the Universal Dependencies (UD) v2.3 (Nivre et al., 2016; Nivre et al., 2018; Kirov et al., 2018) treebanks with the

TARGET LANGUAGE          TRANSFER LANGUAGES (TREEBANK)
German (de-gsd)          English (en-ewt) + Dutch (nl-alpino)
Swedish (sv-lines)       Norwegian (no-nynorsk) + Danish (da-ddt)
North Sami (sme-giella)  Finnish (fi-ftb)
Persian (fa-seraji)      Urdu (ur-udtb) + Russian (ru-gsd)
Galician (gl-treegal)    Spanish (es-gsd) + Portuguese (pt-gsd)
Ukrainian (uk-iu)        Russian (ru-gsd)
Griko                    Greek (el-gdt) + Italian (it-postwita)

Table 2: Dataset details describing the group of related languages over which the model was pre-trained for a given target language.

same train/dev/test split as proposed in McCarthy
et al. (2018). For each target language, the set of
related languages used for pre-training is listed
in Table 2. The Persian and Urdu datasets are in the Perso-Arabic script, so there is no orthographic overlap between the transfer and the target languages. Therefore, for Persian we use uroman,5 a publicly available tool for romanization.

Baselines: As described in Section §2, we compare our proposed method (CRAL) with Uncertainty Sampling (UNS) and Query-by-Committee (QBC). We also compare with a random baseline (RAND) that selects tokens randomly from the unlabeled data D. For QBC, we use the following committee of models C = {θfwd, θbwd, θfull}, where the θ are the CVT views (§4.2). We do not include θfut and θpst as they are much weaker in comparison to the other views.6 For CRAL, UNS, and RAND, we use the full model view.

Model Hyperparameters: We use a hidden size of 25 for the character Bi-LSTM, 100 for the modeling layer, and 200 for the token-level Bi-LSTM. Character embeddings are 30-dimensional and are randomly initialized. We apply a dropout of 0.3 to the character embeddings before inputting them to the Bi-LSTM. A further 0.5 dropout is applied to the output vectors of all Bi-LSTMs. The model is trained using the SGD optimizer with a learning rate of 0.015, until convergence on a validation set.

Active Learning Parameters: For all AL methods, we acquire annotations in batches of 50 and run 20 simulation experiments, resulting in a total of 1000 tokens annotated for each method. We pre-train the model using the above parameters and, after acquiring annotations, we fine-tune it with a learning rate proportional to the number of sentences in the labeled data: lr = 2.5e−5 * |XLABEL|.

5 https://www.isi.edu/~ulf/uroman.html.
6 We chose CVT views for QBC over the ensemble for computational reasons. Training three models independently would require three times the computation. Given that for each language we run 20 experiments, amounting to a total of 120 experiments, reducing the computational burden was preferred.

Figure 3: Our method (CRAL) outperforms existing AL methods for all six languages. The y-axis is the difference in POS accuracy between CRAL and other AL methods, averaged across 20 iterations with batch size 50.

Figure 4: Comparison of the POS performance across the different methods for 20 AL iterations for German.

5.1 Results

Figure 3 compares our proposed CRAL strategy with the existing baselines. The y-axis represents the difference in POS tagging performance between two AL methods, measured by accuracy and averaged across 20 iterations. Across all six languages, our proposed method CRAL shows significant performance gains over the other methods. In Figure 4 we plot the individual accuracy values across the 20 iterations for German, and we see that our proposed method CRAL performs consistently better across multiple iterations. We also see that the zero-shot model on German (iteration 0) gets a decent warm start because of cross-lingual transfer from Dutch and English.

Furthermore, to check how the performance of the AL methods is affected by the underlying POS tagger architecture, we conduct additional
experiments with a different architecture. We replace the CRF layer with a linear layer and use a token-level softmax to predict the tags, keeping the encoder as before. We present the results for four (North Sami, Swedish, German, Galician) of the six languages in Figure 5. Our proposed method CRAL still always outperforms QBC. We observe that only for North Sami does UNS outperform CRAL, which is similar to the results obtained with the BRNN/CRF architecture, where CRAL performs at par with UNS.

Figure 5: Comparing the difference in POS performance across the AL methods with the BRNN/MLP architecture, averaged across 20 iterations.

TARGET LANGUAGE   UNS    QBC    CRAL
German            74 %   76 %   82 %
Swedish           56 %   54 %   62 %
North Sami        10 %   12 %   14 %
Persian           50 %   46 %   46 %
Galician          40 %   42 %   44 %
Ukrainian         38 %   48 %   48 %

Table 3: Percentage of syncretic word types in the first iteration of active learning (consisting of 50 types).

5.2 Analysis

In the previous section, we compared the different AL methods by measuring the average POS accuracy. In this section, we perform an intrinsic evaluation to compare the quality of the selected data on two aspects:

How similar are the selected and the true data distributions? To measure this similarity, we compare the output tag distribution for each word type in the selected data with the tag distribution in the gold data. This evaluation is necessary because there are a significant number of syncretic word types in the selected data, as seen in Table 3. To recap, syncretic word types are word types that can have multiple POS tags based on context. We compute the Wasserstein distance (a metric for the distance between two probability distributions) between the annotated tag distribution and the true tag distribution for each word type z:

$$WD(z) = \sum_{j \in J_z} \big| p^{AL}_{j}(z) - p^{*}_{j}(z) \big|,$$

where Jz is the set of output tags for a word type z in the selected active learning data, $p^{AL}_{j}(z)$ denotes the proportion of tokens annotated with tag j in the selected data, and $p^{*}_{j}(z)$ is the proportion of tokens having tag j in the entire gold data. A lower Wasserstein distance suggests higher similarity between the selected tag distribution and the gold tag distribution. Given that each iteration selects unique tokens, this distance can be computed after each of the n iterations. Table 4 shows that our proposed strategy CRAL selects data that closely matches the gold data distribution for four out of the six languages.

TARGET LANGUAGE   CRAL     UNS      QBC
German            0.0465   0.0801   0.0849
Swedish           0.0811   0.1196   0.1013
North Sami        0.0270   0.0328   0.0346
Persian           0.0384   0.0583   0.0444
Galician          0.0722   0.0953   0.0674
Ukrainian         0.0770   0.1067   0.0665

Table 4: Wasserstein distance between the output tag distributions of the selected data and the gold data; lower is better. The above results are after 200 annotated tokens, i.e., four AL iterations.

How effective is the AL method in reducing confusion across iterations? Across iterations, as more data is acquired we expect the incorrect predictions from the previous iterations to be rectified in the subsequent iterations, ideally without damaging the accuracy of existing predictions. However, as seen in Table 3, the AL methods have a tendency to select syncretic word types, which suggests that across multiple iterations the same word types could get selected, albeit under a different context. This could lead to more confusion, thereby damaging the existing accuracy if the selected type is not a good representative of its annotated tag. Therefore, we calculate the number of existing correct predictions that are incorrectly predicted in the subsequent iteration, and present the results in Figure 6.

Figure 6: The confusion score measures the percentage of correct predictions in the first iteration that were incorrectly predicted in the second iteration. Lower values suggest that the selected annotations in the subsequent iterations cause less damage to the model trained on the existing annotations.

Figur 7: In the oracle setting, our method (CRAL-
ORACLE) outperforms UNS-ORACLE and QBC-ORACLE in
most cases, while the non-oracle CRAL matches the
performance of its oracle counterpart. The y-axis
measures the difference in average accuracy across
20 iterations between the methods being compared.

A lower value suggests that the AL method was effective in improving overall accuracy without damaging the accuracy from existing annotations, and thereby was successful in reducing confusion. From Figure 6, the proposed strategy CRAL is clearly more effective than the others in most cases in reducing confusion across iterations.
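For reference, both intrinsic measures used in this section can be sketched as below; the data structures, the use of absolute differences in WD(z), and the normalization of the confusion score are assumptions made for illustration.

```python
from collections import Counter

def wd_per_type(selected, gold):
    """Per-type distance between annotated and gold tag proportions, following
    WD(z) above (absolute differences assumed).

    selected: dict word type -> list of tags annotated for that type.
    gold:     dict word type -> list of tags for that type in the gold data.
    """
    distances = {}
    for z, tags in selected.items():
        p_al, p_gold = Counter(tags), Counter(gold.get(z, []))
        n_al, n_gold = sum(p_al.values()), max(sum(p_gold.values()), 1)
        distances[z] = sum(abs(p_al[j] / n_al - p_gold[j] / n_gold)
                           for j in p_al)          # j ranges over J_z
    return distances

def confusion_score(prev_preds, new_preds, gold_tags):
    """Fraction of tokens that were correct before fine-tuning on the new
    batch but become incorrect afterwards (normalization assumed)."""
    was_correct = [p == g for p, g in zip(prev_preds, gold_tags)]
    damaged = sum(1 for ok, n, g in zip(was_correct, new_preds, gold_tags)
                  if ok and n != g)
    return damaged / max(sum(was_correct), 1)
```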

5.3 Oracle Results

In order to check how close to optimal our
proposed method CRAL is, we conduct ‘‘oracle’’
comparisons, where we have access to the gold
labels during data selection. The oracle versions
of existing methods UNS-ORACLE and QBC-ORACLE
have already been described in Section §2. For our
proposed method CRAL, we construct the oracle
version as follows:

CRAL-ORACLE: Select the types having the highest number of incorrect predictions. Within each type, select the output tag that is most often incorrectly predicted. This gives the most confusing output tag for a given word type. From the tokens having the most confusing output tag, select the token representative by taking the centroid of their respective contextualized representations, similar to the procedure described in Section §3.

Figure 7 compares the performance gain of the POS model trained using CRAL-ORACLE over UNS-ORACLE and QBC-ORACLE (Figures 7.a, 7.b). Even under the ''oracle'' setting, our proposed method performs consistently better across all languages (except Ukrainian), unlike the existing methods, as seen in Figure 2. CRAL closely matches the performance of its corresponding ''oracle'' CRAL-ORACLE (Figure 7.c), which suggests that the proposed method is close to an optimal AL method. However, we note that CRAL-ORACLE is not a ''true'' upper bound, as for Ukrainian it does not outperform CRAL. We find that for Ukrainian, up to 250 tokens, the oracle method outperforms the non-oracle method, after which it underperforms. We hypothesize that this inconsistency is due to noisy annotations in Ukrainian. On analysis we found that the oracle method predicts numerals as NUM, but in the gold data some of them are annotated as ADJ. We also find several tokens that have punctuation and numbers mixed with the letters.7

7 This is also noted in the UD page: https://universaldependencies.org/treebanks/uk_iu/index.html.

In order to verify whether CRAL is accurately selecting data at near-oracle levels, we analyze the intermediate steps leading to the data selection. For each selected word type z ∈ XLABEL, we analyze how well our proposed method of weighting encoder representations with the model confidence of the most confused tag and taking the centroid actually succeeds at ''representative'' token selection. If this is indeed the case, tokens in the vicinity of the centroid should also have the same ''most confused tag'' as their predicted label, and thereby be mis-classified instances. To verify this hypothesis we compare how many of the 100 tokens closest to the centroid (in the representation space), XNN(z), are truly misclassified. This score is given by p(z) for each selected word type z:

$$X_{NN}(z) = b\text{-}\arg\min_{x_{i,t} = z \,\in\, D} |c_{i,t} - c_z|$$
$$p(z) = \frac{|\hat{y}_{i,t} \neq y^{*}_{i,t}|}{|X_{NN}(z)|},$$

where b = 100, cz is the contextualized representation of the representative instance for the word type z (namely, the centroid), and ci,t is the contextualized representation of z's token instance xi,t. y∗i,t and ŷi,t are the true and predicted labels of xi,t. We report the average and median of p across all the selected tokens of the first AL iteration in Figure 8. We see that for all languages the median is high (i.e., > 0.8), which suggests that the majority of the token-tag pairs satisfy this criterion, thus supporting the step of weighting the token representations and choosing the centroid for annotation.
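The verification score p(z) can be computed as in the following sketch, assuming the per-occurrence representations, predictions, and gold tags for a selected type are available as parallel arrays.

```python
import numpy as np

def centroid_purity(reps, predicted_tags, gold_tags, centroid, b=100):
    """p(z): fraction of the b tokens nearest to the centroid whose
    prediction is wrong (a sketch of the verification step above).

    reps:           [n_occurrences, d] contextualized vectors c_{i,t} of type z.
    predicted_tags: model predictions for those occurrences.
    gold_tags:      true labels for those occurrences.
    centroid:       the representative vector c_z chosen for type z.
    """
    dists = np.linalg.norm(reps - centroid, axis=1)
    nearest = np.argsort(dists)[:b]                  # X_NN(z)
    wrong = sum(predicted_tags[i] != gold_tags[i] for i in nearest)
    return wrong / len(nearest)
```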


Figure 8: We report the mean and median of p over all the 50 token-tag pairs selected by the first AL iteration of CRAL. Across all languages, the majority of the token-tag pairs satisfy the criterion of using weighted representations with the centroid for token selection.

We also compare the percentage of token-tag overlap between the data selected by CRAL and its oracle counterpart, CRAL-ORACLE. For the first AL iteration, the proposed method CRAL has more than 50% overlap with the oracle method for all languages, providing some evidence as to why CRAL matches the oracle performance.

5.4 Effect of Cross-View Training

As mentioned in Section §4.2, we use CVT not only to improve our model overall but also to have a well-calibrated model, which is important for active learning. A model is well calibrated when its predicted probabilities over the outcomes reflect the true probabilities over these outcomes (Nixon et al., 2019). We use Static Calibration Error (SCE), a metric proposed by Nixon et al. (2019), to measure the model calibration. SCE bins the model predictions separately for each output tag probability and computes the calibration error within each bin, which is averaged across all the bins to produce a single score. For each output tag, bins are created by sorting the predictions based on the output class probability. Thus, the first 10% are placed in bin 1, the next 10% in bin 2, and so on. We conduct two ablation experiments to measure the effect of CVT. First, we train a joint POS model on the English and Norwegian datasets using all available training data, and evaluate on the English test set. Second, we use this pre-trained model, fine-tune it on 200 randomly sampled German annotations, and evaluate on the German test data. We train models with and without CVT, denoted by +/- in Table 5. We find that training with CVT results both in higher accuracy and in lower calibration error (SCE). This effect of CVT is much more pronounced in the second experiment, which presents a low-resource scenario and is common in an AL framework.

EXPERIMENT SETTING       CVT   SCE      ACCURACY
EN + NO → EN             −     0.0190   95.53
                         +     0.0174   95.58
EN + NO + DE-200 → DE    −     0.1658   69.90
                         +     0.1391   74.61

Table 5: Evaluating the effect of CVT across two experimental settings. EN: English, NO: Norwegian, DE-200: 200 German annotations. Left of ''→'' are the pre-training languages and right of ''→'' is the language on which the model is evaluated. Accuracy measures the POS model performance (higher is better) and SCE measures the model calibration (lower is better).
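A simplified version of SCE consistent with the description above can be sketched as follows; equal-mass bins and an unweighted average over bins are assumptions.

```python
import numpy as np

def static_calibration_error(probs, labels, n_bins=10):
    """Static Calibration Error (a sketch; 10 equal-mass bins per class and
    an unweighted average over bins are simplifying assumptions).

    probs:  [n_tokens, n_tags] predicted tag probabilities.
    labels: [n_tokens] gold tag ids.
    """
    n, k = probs.shape
    errors = []
    for j in range(k):                        # bin each output tag separately
        conf = probs[:, j]
        correct = (labels == j).astype(float)
        order = np.argsort(conf)              # sort by predicted class probability
        for chunk in np.array_split(order, n_bins):
            if len(chunk) == 0:
                continue
            # calibration error of the bin: |mean confidence - empirical accuracy|
            errors.append(abs(conf[chunk].mean() - correct[chunk].mean()))
    return float(np.mean(errors))
```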

6 Human Annotation Experiment

In this section, we apply our proposed approach to Griko, an endangered language spoken by around 20,000 people in southern Italy, in the Grecìa Salentina area southeast of Lecce. The only available online Griko corpus, referred to as UoI (Lekakou et al., 2013),8 consists of 330 utterances by nine native speakers with POS annotations. Additionally, Anastasopoulos et al. (2018) collected, processed, and released 114 stories, of which only the first 10 were annotated by experts and have gold-standard annotations. We conduct human annotation experiments on the remaining unannotated stories in order to compare the different active learning methods.

Setup: We use Modern Greek and Italian as the two related languages to train our initial POS model.9

8 http://griko.project.uoi.gr.
                      AL     ITERATION-0   ITERATION-1   ITERATION-2   ITERATION-3   IA AGR.   WD
Linguist-1            CRAL   52.93         63.42 (10)    69.07 (10)    65.16 (16)    0.58      0.281
                      QBC    52.93         55.82 (15)    62.03 (17)    66.51 (15)    0.68      0.243
                      UNS    52.93         56.14 (15)    57.04 (15)    65.73 (11)    0.58      0.379
Linguist-2            CRAL   52.93         61.24 (15)    67.24 (20)    67.05 (18)    0.70      0.346
                      QBC    52.93         56.52 (20)    65.96 (20)    66.71 (17)    0.72      0.245
                      UNS    52.93         55.45 (17)    58.80 (17)    65.73 (20)    0.70      0.363
Linguist-3 (Expert)   CRAL   52.93         65.63         69.17         68.09         –         0.159
                      QBC    52.93         60.50         65.69         56.20         –         0.170
                      UNS    52.93         58.51         64.26         65.93         –         0.125

Table 6: Griko test set POS accuracy after each AL annotation iteration. Each iteration consists of 50 token-level annotations. The number in parentheses is the time in minutes required for annotation. The IA AGR. column reports the inter-annotator agreement against the expert linguist for the first iteration. WD is the Wasserstein distance between the selected tokens and the test distribution.

To further improve the model, we fine-tune it on the UoI corpus, which consists of 360 labeled sentences. We evaluate the AL performance on the 10 gold-labelled stories from Anastasopoulos et al. (2018), of which the first two stories, comprising 143 labeled sentences, are used as the validation set; the remaining 800 labeled sentences form the test set. We use the unannotated stories as our unlabeled pool. We compare CRAL with UNS and QBC, conducting three AL iterations for each method, where each iteration selects roughly 50 tokens for annotation. The annotations are provided by two linguists familiar with Modern Greek and somewhat familiar with Griko. To familiarize the linguists with the annotation interface, a practice session was conducted in Modern Greek. In the interface, tokens that need to be annotated are highlighted and presented with their surrounding context. The linguist then simply selects the appropriate POS tag for each highlighted token. Because we do not have gold annotations for these experiments, we also obtained annotations from a third linguist who is more familiar with Griko grammar.

Results: Table 6 presents the results of three iterations for each AL method, with our proposed method CRAL outperforming the other methods in most cases. We note that we found several frequent tokens (i.e., 863/13,740 tokens) in the supposedly gold-standard Griko test data to be inconsistently annotated. Specifically, the original annotations

9 With Italian being the dominant language in the region, code-switching phenomena appear in the Griko corpora.

did not distinguish between coordinating (CCONJ) and subordinating conjunctions (SCONJ), unlike the UD schema. As a result, when converting the test data to the UD schema all conjunctions were tagged as subordinating ones. Our annotation tool, however, allowed for either CCONJ or SCONJ as tags, and the annotators did make use of them. With the help of a senior Griko linguist (Linguist-3), we identified a few types of conjunctions that are always coordinating: variations of ''and'' (ce and c'), and of ''or'' (e or i). We fixed these annotations and used them in our experiments.

For Linguist-1, we observe a decrease in performance in Iteration-3. One possible reason for this decrease is Linguist-1's poor annotation quality, which is also reflected in their low inter-annotator agreement scores. We observe a slight decrease for the other linguists, which we hypothesize is due to domain mismatch between the annotated data and the test data. Indeed, the test set stories and the unlabeled ones originate from different time periods spanning a century, which can lead to slight differences in orthography and usage. For example, after three AL iterations, the token ''i'' had been annotated as CONJ twice and DET once, whereas in the test data all instances of ''i'' are annotated as DET. Similar to the simulation experiments, we compute the confusion score for all linguists in Figure 9. We find that, unlike in the simulation experiments, a model trained with UNS annotations causes less damage to the existing annotations as compared to CRAL. However, we note that the model performance from the UNS annotations is much lower than CRAL to begin with.

Figure 9: Confusion scores for the three Griko linguists. Lower values suggest that the selected annotations in the subsequent iterations cause less damage to the model trained on the existing annotations.

We also compute the inter-annotator agreement at Iteration-1 with the expert (Linguist-3) (Table 6). We find that the agreement scores are lower than one would expect (cf. the annotation test run on Modern Greek, for which we have gold annotations, yielded much higher inter-annotator agreement scores, over 90%). The justification probably lies with our annotators having limited knowledge of Griko grammar, while our AL methods require annotations for ambiguous and ''hard'' tokens. However, this is a common scenario in language documentation, where linguists are often required to annotate in a language they are not very familiar with, which makes this task even more challenging. We also recorded the annotation time needed by each linguist for each iteration in Table 6. Compared with the UNS method, the linguists annotated on average 2.5 minutes faster using our proposed method, which suggests that UNS tends to select harder data instances for annotation.

Similar to the simulation experiments, we report the Wasserstein distance (WD) for all linguists in Table 6. However, unlike in the simulation setting where the WD was computed with the gold training data, for the human experiments we do not have access to the gold annotations and therefore computed WD with the gold test data, which is from a slightly different domain; this affects the results somewhat. We observe that QBC has lower WD scores for Linguist-1 and Linguist-2, and UNS for Linguist-3. On further analysis, we find that even though QBC has a lower WD, it also has the least coverage of the test data, that is, it has the fewest annotated tokens that are present in the test data, as shown in Table 7. We would like to note that a lower WD score does not necessarily translate to better tagging accuracy, because the WD metric only tells us how closely the output tag distribution of the selected data matches the gold distribution for that selected data.

             CRAL   UNS   QBC
Linguist-1   90     95    72
Linguist-2   84     88    72
Linguist-3   74     90    61

Table 7: Each cell denotes the number of annotated tokens that are also present in the test data.

7 Related Work

Active Learning for POS Tagging: AL has been widely used for POS tagging. Garrette and Baldridge (2013) use graph-based label propagation to generalize initial POS annotations to the unlabeled corpus. Further, they find that under a constrained time setting, type-level annotations prove to be more useful than token-level annotations. In line with this, Fang and Cohn (2017) also select informative word types based on uncertainty sampling for low-resource POS tagging. They also construct a tag dictionary from these type-level annotations and then propagate the labels across the entire unlabeled corpus. However, in our initial analysis on uncertainty sampling, we found that adding label propagation harmed the accuracy in certain languages because of prevalent syncretism. Ringger et al. (2007) present different variations of uncertainty-sampling and QBC methods for POS tagging. Similar to Fang and Cohn (2017), they find uncertainty sampling with a frequency bias to be the best strategy. Settles and Craven (2008) present a survey of the different active learning strategies for sequence labeling tasks, and Marcheggiani and Artières (2014) discuss strategies for acquiring partially labeled data. Sener and Savarese (2018) propose a core-set selection strategy aimed at finding the subset that is competitive across the unlabeled dataset. This work is most similar to ours with respect to using geometric center points as the most representative. However, to the best of our knowledge, none of the existing works are targeted at reducing confusion within the output classes.
Low-resource POS Tagging: Several cross-lingual transfer techniques have been used for improving low-resource POS tagging. Cotterell and Heigold (2017) and Malaviya et al. (2018) train a joint neural model on related high-resource languages and find it to be very effective on low-resource languages. The main advantage of these methods is that they do not require any parallel text or dictionaries. Das and Petrov (2011), Täckström et al. (2013), Yarowsky et al. (2001), and Nicolai and Yarowsky (2019) use annotation projection methods to project POS annotations from one language to another. However, annotation projection methods use parallel text, which often might not be of good quality for low-resource languages.

8 Conclusion

We have presented a novel active learning method for low-resource POS tagging that works by reducing confusion between output tags. Using simulation experiments across six typologically diverse languages, we show that our confusion-reducing strategy achieves higher accuracy than existing methods. Further, we test our approach under a true active learning setting where we ask linguists to document POS information for an endangered language, Griko. Despite the annotators being unfamiliar with the language, our proposed method achieves performance gains over the other methods in most iterations. For our next steps, we plan to explore the possibility of adapting our proposed method for complete morphological analysis, which poses an even harder challenge for AL data selection due to the complexity of the task.

Acknowledgments

The authors are grateful to the anonymous reviewers and the Action Editor, who took the time to provide many interesting comments that made the paper significantly better, and to Eleni Antonakaki and Irini Amanaki for participating in the human annotation experiments. This work is sponsored by the Dr. Robert Sansom Fellowship, the Waibel Presidential Fellowship, and the National Science Foundation under grant 1761548.

References

Antonios Anastasopoulos, Marika Lekakou, Josep Quer, Eleni Zimianiti, Justin DeBenedetto, and David Chiang. 2018. Part-of-speech tagging on an endangered language: A parallel Griko-Italian resource. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2529–2539, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Ankita and K. A. Abdul Nazeer. 2018. Part-of-
Speech Tagging and Named Entity Recognition
using Improved Hidden Markov Model and
Bloom Filter. In 2018 International Conference
on Computing, Power and Communication
Technologies (GUCON), pages 1072–1077.
IEEE.

Kedar Bellare and Andrew McCallum. 2007.
Learning extractors from unlabeled text using
relevant databases. In Sixth International Work-
shop on Information Integration on the Web.

Bernd Bohnet, Ryan McDonald, Gonçalo Simões, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2642–2652, Melbourne, Australia. Association for Computational Linguistics. https://www.aclweb.org/anthology/P18-1246

Aditi Chaudhary, Jiateng Xie, Zaid Sheikh, Graham Neubig, and Jaime Carbonell. 2019. A Little Annotation does a Lot of Good: A Study in Bootstrapping Low-resource Named Entity Recognizers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5164–5174, Hong Kong, China. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19-1520

Kevin Clark, Minh-Thang Luong, Christopher D.
Manning, and Quoc Le. 2018. Semi-Supervised

Sequence Modeling with Cross-View Training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1914–1925, Brussels, Belgium. Association for Computational Linguistics.

Ryan Cotterell and Georg Heigold. 2017. Cross-Lingual Character-Level Neural Morphological Tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 748–759, Copenhagen, Denmark. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D17-1078

Dipanjan Das and Slav Petrov. 2011. Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 600–609, Portland, Oregon, USA. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Meng Fang and Trevor Cohn. 2017. Model Transfer for Tagging Low-Resource Languages using a Bilingual Dictionary. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 587–593, Vancouver, Canada. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P17-2093

Dan Garrette and Jason Baldridge. 2013. Learning a Part-of-Speech Tagger from Two Hours of Annotation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 138–147, Atlanta, Georgia. Association for Computational Linguistics.

Harald Hammarström, Robert Forkel, and Martin Haspelmath. 2018. Glottolog 3.3. Max Planck Institute for the Science of Human History. Jena.

Sheng-Jun Huang, Rong Jin, and Zhi-Hua Zhou.
2010. Active Learning by Querying Informa-
tive and Representative Examples. In Advances
in Neural Information Processing Systems,
pages 892–900.

Zhiheng Huang, Wei Xu, and Kai Yu. 2015.
Bidirectional LSTM-CRF models for Sequence
Tagging. arXiv preprint arXiv:1508.01991.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, and Greg Corrado. 2017. Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguistics, 5:339–351. DOI: https://doi.org/10.1162/tacl_a_00065

Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian J. Mielke, Arya McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal Morphology. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resources Association.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, CA. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/N16-1030

Marika Lekakou, Valeria Baldissera, and Antonios Anastasopoulos. 2013. Documentation and
analysis of an endangered language: aspects of the grammar of Griko.

David D. Lewis. 1995. Evaluating and optimizing autonomous text classification systems. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 246–254. DOI: https://doi.org/10.1145/215206.215366

Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing Transfer Languages for Cross-Lingual Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics.

Xuezhe Ma and Eduard Hovy. 2016. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics.

Chaitanya Malaviya, Matthew R. Gormley, and Graham Neubig. 2018. Neural Factor Graph Models for Cross-Lingual Morphological Tagging. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2653–2663, Melbourne, Australia. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P18-1247

Diego Marcheggiani and Thierry Artières. 2014. An experimental comparison of active learning strategies for partially labeled sequences. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 898–906, Doha, Qatar. Association for Computational Linguistics. DOI: https://doi.org/10.3115/v1/D14-1097

Arya D. McCarthy, Miikka Silfverberg, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2018. Marrying Universal Dependencies and Universal Morphology. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 91–101. DOI: https://doi.org/10.18653/v1/W18-6011

Garrett Nicolai and David Yarowsky. 2019. Learning Morphosyntactic Analyzers from the Bible via Iterative Annotation Projection across 26 Languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1765–1774, Florence, Italy. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P19-1172

Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 1659–1666.

Jeremy Nixon, Mike Dusenberry, Linchuan
Zhang, Ghassen Jerfel, and Dustin Tran. 2019.
Measuring Calibration in Deep Learning. arXiv
preprint arXiv:1904.01685.

Eric Ringger, Peter McClanahan, Robbie Haertel,
George Busby, Marc Carmen, James Carroll,
Kevin Seppi, and Deryle Lonsdale. 2007.
Active Learning for Part-of-Speech Tagging:
Accelerating corpus annotation. In Proceed-
ings of the Linguistic Annotation Workshop,
pages 101–108. DOI: https://doi.org
/10.3115/1642059.1642075

Ozan Sener and Silvio Savarese. 2018. Active
Learning for Convolutional Neural Networks:
A Core-Set Approach. In International Con-
ference on Learning Representations.

Burr Settles. 2009. Active Learning Literature Survey. University of Wisconsin-Madison, Department of Computer Sciences.

Burr Settles and Mark Craven. 2008. An Analysis of Active Learning Strategies for Sequence Labeling Tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural
Language Processing, pages 1070–1079, Honolulu, Hawaii. Association for Computational Linguistics. DOI: https://doi.org/10.3115/1613715.1613855

Aditya Siddhant, Ankur Bapna, Henry Tsai, Jason Riesa, Karthik Raman, Melvin Johnson, Naveen Ari, and Orhan Firat. 2020. Evaluating the Cross-lingual Effectiveness of Massively Multilingual Neural Machine Translation. DOI: https://doi.org/10.1609/aaai.v34i05.6414

Und

type

Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov,
Ryan McDonald, and Joakim Tagging. 2013.
Token
kreuzen-
lingual part-of-speech Tagging. Transactions
von
the Association for Computational Lin-
guistics, 1:1–12. DOI: https://doi.org
/10.1162/tacl a 00205

constraints

für

Joakim Nivre, Rogier Blokland, Niko Partanen, Michael Rießler, and Jack Rueter. 2018. Universal Dependencies 2.3.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Zhe Wang, Xiaoyi Liu, Limin Wang, Yu Qiao,
Xiaohui Xie, and Charless Fowlkes. 2018.
Structured Triplet Learning with POS-tag
Guided Attention for Visual Question Ans-
wering. In 2018 IEEE Winter Conference on
Applications of Computer Vision (WACV),
pages 1888–1896, IEEE. DOI: https://
doi.org/10.1109/WACV.2018.00209

Zhilin Yang, Ruslan Salakhutdinov, and William
W. Cohen. 2017. Transfer learning for sequence
tagging with hierarchical recurrent networks.
arXiv preprint arXiv:1703.06345.

David Yarowsky, Grace Ngai, and Richard
Wicentowski. 2001. Inducing multilingual text
analysis tools via robust projection across
aligned corpora. In Proceedings of the First
International Conference on Human Language
Technology Research, pages 1–8. Associ-
ation for Computational Linguistics. DOI:
https://doi.org/10.3115/1072133
.1072187

Imed Zitouni and Radu Florian. 2008. Mention
detection crossing the language barrier. In
Proceedings of the Conference on Empirical
Methods in Natural Language Processing,
pages 600–609. Association for Computa-
tional Linguistics. DOI: https://doi.org
/10.3115/1613715.1613789
