Revisiting Few-shot Relation Classification:
Evaluation Data and Classification Schemes

Ofer Sabo1 Yanai Elazar1,2 Yoav Goldberg1,2 Ido Dagan1
1Computer Science Department, Bar Ilan University, Israel
2Allen Institute for Artificial Intelligence
{ofersabo,yanaiela,yoav.goldberg,ido.k.dagan}@gmail.com

Abstract
We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories (none-of-the-above, NOTA), we first revisit the recent popular dataset structure for FSL, pointing out its unrealistic data distribution. To remedy this, we propose a novel methodology for deriving more realistic few-shot test data from available datasets for supervised RC, and apply it to the TACRED dataset. This yields a new challenging benchmark for FSL-RC, on which state-of-the-art models show poor performance. Next, we analyze classification schemes within the popular embedding-based nearest-neighbor approach for FSL, with respect to the constraints they impose on the embedding space. Triggered by this analysis, we propose a novel classification scheme in which the NOTA category is represented as learned vectors, shown empirically to be an appealing option for FSL.

1 Introduction

We consider relation classification, an important sub-task of relation extraction, in which one is interested in determining, given a text with two marked entities, whether the entities conform to one of a set of pre-determined relations, or not. While supervised methods for this task exist and work relatively well (Baldini Soares et al., 2019; Zhang et al., 2018; Wang et al., 2016; Miwa and Bansal, 2016), they require large amounts of training data, which are hard to obtain in practice.

We are therefore interested in a data-lean scenario in which users provide only a handful of training examples for each relation they are interested in. This has been formalized in the machine learning community as few-shot learning (FSL) (§2).


FSL for RC has recently been addressed by the work of Han et al. (2018) and Gao et al. (2019), who introduced the FewRel 1.0 and, shortly after, the FewRel 2.0 challenges, in which researchers are provided with a large labeled dataset of background relations and are tasked with producing strong few-shot classifiers: classifiers that will work well given a few labeled examples of relations not seen in the training set. The task became popular, with scores on FewRel 1.0 reaching an accuracy of 93.9% (Baldini Soares et al., 2019), surpassing the human-level performance of 92.2%. Results on FewRel 2.0 are lower, at 80.3% for the best system (Gao et al., 2019), but are still very high considering the difficulty of the task.

Is few-shot relation classification solved? We show that this is far from being the case. We argue that the evaluation protocol in FewRel 1.0 is based on highly unrealistic assumptions about how the models will be used in practice, and while FewRel 2.0 tried to amend it, its evaluation setup remains highly unrealistic (§3.1). Therefore, we propose a methodology to transform supervised datasets into corresponding realistic few-shot evaluation scenarios (§3.2). We then apply our transformation to the supervised TACRED dataset (Zhang et al., 2017) to create such a new few-shot evaluation set (§3.3). Our experiments (§6.2) reveal that indeed, moving to this realistic setup, the performance of existing state-of-the-art (SOTA) models drops significantly, from scores of around 80 F1 (as well as accuracy) to around 30.

A core factor in a realistic few-shot setup is the NOTA (none-of-the-above) option: allowing for the case where a particular test instance does not conform to any of the predefined target relations. Triggered by an analysis of possible decision rules for handling the NOTA category (§5), we propose a novel enhancement that models NOTA by an explicit set of vectors in the embedding space (§5.2). This explicit "NOTA as vectors" approach achieves new SOTA


performance on the FewRel 2.0 dataset, and outperforms other models on our new dataset (§6). Still, the realistic scenario of our TACRED-derived dataset remains far from being solved, calling for substantial future research. We release our models, data, and, more importantly, our data conversion procedure, to encourage such future work.

2 Task Setup and Formulation

2.1 Relation Classification

The relation extraction (RE) task takes as input a set of documents and a list of pre-specified relations, and aims to extract tuples of the form (e1, e2, r), where e1 and e2 are entities and r is a relation that holds between them (r belongs to a pre-specified list of relations of interest). This task is often approached by a pipeline that generates candidate (e1, e2, s) triplets and classifies each one to a relation (or indicates there is no relation). The classification task from such triplets to an expressed relation is called relation classification (RC). It is often isolated and addressed on its own, and is also the focus of the current work. Zhang et al. (2017) demonstrate that improvements in RC carry over to improvements in RE.

In the RC task, each input xi = (e1, e2, s)i consists of a sentence s with an (ordered) pair of marked entities (each entity is a span over s), and the output is one of |R| + 1 classes, indicating that the entities in s conform to one of the relations in a set R of target relations, or to none of them. We refer to a triplet xi as a relation instance. For example, if the target relations are R = {Owns, WorksFor}, the relation instance "Wired reports that in a surprising reshuffle at Microsoft_e2, Satya Nadella_e1 has taken over as the managing director of the company." should be classified as WorksFor. The same sentence with the entity pair e1 = Satya Nadella and e2 = Wired should be classified as "NoRelation" (NOTA).

2.2 The Few-Shot N-Way K-Shot Setup

As supervised datasets are often hard and expensive to obtain, there is a growing interest in the few-shot scenario, where the user is interested in |R| target relations, but can provide only a few labeled instances for each relation. In this work, we follow the increasingly popular N-Way K-Shot setup of FSL, proposed by Vinyals et al. (2016) and Snell et al. (2017). This setup was adapted to relation classification, resulting in the FewRel and FewRel 2.0 datasets (Han et al., 2018; Gao et al., 2019). We further discuss the datasets in §3.

The N-Way K-Shot setup assumes that the user is interested in N target relations (Rtarget = {c1, . . . , cN}), and has access to K instances (typically few) of each one, called the support set for class cj, denoted by σcj:

σ = {σc1, . . . , σcN}
σcj = {x1, . . . , xK}  s.t.  r(xi) = cj,  ∀ cj ∈ Rtarget

where r(x) is the gold relation of instance x; σcj is the support set for relation cj; and σ is the support set for all N relations in Rtarget.

A set of target relations and the corresponding support sets is called a scenario. Given a scenario S = (Rtarget, σ), our goal is to create a decision function fS(x) : x → Rtarget ∪ {⊥}, where ⊥ indicates "none of the relations in Rtarget". Let X = x1, . . . , xm be a set of instances with corresponding true labels r(x1), . . . , r(xm); our aim is to minimize the average cumulative evaluation loss (1/m) Σ_{i=1}^{m} ℓ(fS(xi), r(xi)), where ℓ is a per-instance loss function, usually the zero-one loss.

When treating FSL as a transfer learning problem, as we do here, there is also a background set of relations Rbackground, disjoint from the target relation set, for which there is plenty of labeled data available. This data can also be used for constructing the decision function.

The performance of an N-Way K-Shot FSL algorithm on a dataset X is highly dependent on the specific scenario S: Both the choice of the N relations that need to be distinguished and the choice of the specific K examples for each relation can greatly influence the results. In a real-life scenario, the user is interested in a specific set of relations and examples, but when developing and evaluating FSL algorithms, we are concerned with the expected performance of a method given an arbitrary set of categories and examples: E_S[(1/m) Σ_{i=1}^{m} ℓ(fS(xi), r(xi))], which can be approximated by averaging the losses over several random scenarios Sj, each varying the relation set and the example set. In a practical evaluation, the number of N-Way K-Shot scenarios that can be considered is limited, relative to the combinatorial number of possible scenarios. To maximize the number of considered scenarios, we re-write the loss to consider expectations also over the data points: E_S E_{x∼X}[ℓ(fS(x), r(x))].
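To make this protocol concrete, the following minimal sketch (ours, in Python; the dataset layout and all names are illustrative assumptions, not the released implementation) approximates this expectation by averaging the zero-one loss over randomly sampled scenarios:

    import random

    def estimate_expected_loss(eval_set, decision_fn, n_way, k_shot, n_scenarios=1000):
        # eval_set: dict mapping each relation in R_eval to its list of instances.
        # decision_fn: the decision function f_S, taking a support set and a query
        # and returning a predicted relation label.
        relations = list(eval_set)
        losses = []
        for _ in range(n_scenarios):
            r_target = random.sample(relations, n_way)                  # pick R_target
            support = {c: random.sample(eval_set[c], k_shot) for c in r_target}
            gold = random.choice(r_target)
            query = random.choice(eval_set[gold])
            pred = decision_fn(support, query)
            losses.append(0.0 if pred == gold else 1.0)                 # zero-one loss
        return sum(losses) / len(losses)

Each iteration instantiates one scenario S; sampling several queries per scenario, as in the rewritten expectation, further increases the number of scenario-query pairs for a fixed sampling budget.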

This gives rise to an evaluation protocol that considers the loss over many episodes, where each episode is composed of: (1) a random choice of N distinct target relations Rtarget = {c1, . . . , cN}, ci ≠ cj; (2) a corresponding random support set σ = {σc1, . . . , σcN} of N ∗ K instances (K instances in each σcj); and (3) a single randomly chosen labeled example considered as a query, (x, r(x)), which does not appear in the support set. To summarize, an evaluation set for N-Way K-Shot FSL is a set of episodes, each consisting of N target relations, K supporting examples for each relation, and a query. For each episode, the algorithm should classify the query instance to one of the relations in the support set, or to none of them.

In practice, the episodes in an evaluation set are obtained by sampling episodes from a labeled dataset. As we discuss in the following section, the specifics of the labeled dataset and the sampling procedure can greatly influence the realism of the evaluation, and the difficulty of the task.

2.3 Low-resource Relation Classification: Related Work

Other than FSL, several setups for investigating RC under low-resource settings have been proposed.

Obamuyide and Vlachos (2019) experimented with limited-supervision settings on TACRED. Their setting is different from the transfer-based few-shot setting addressed in our paper, however. In most of their experiments the number of training instances per relation is much higher, not fitting the ad hoc nature of the few-shot setting. Further, they train a model on all classes, not addressing inference on new class types at test time.
Distant supervision is another approach for handling low-resource RC (Mintz et al., 2009). This approach leverages noisy labels for training a model, produced by aligning relation instances to a knowledge base. Particularly, it considers sentences containing a pair of entities holding a known relation as instances of that relation. For example, a sentence containing the entities 'Barack Obama' and 'Hawaii' will be labeled as an instance of the born in relation between these entities, even though that sentence might describe, for example, a later visit of Obama to Hawaii.

Finally, another line of work is the Zero-Shot setup, where the RC task is reduced to another inference task, leveraging trained models for that task. Specifically, Levy et al. (2017) proposed a method that leverages reading comprehension models, while Obamuyide and Vlachos (2018) suggest using textual entailment models.

3 Desired Versus Existing Few-Shot Relation Classification Datasets

An FSL system is intended to be used in a real-life scenario. Thus, evaluation procedures for FSL should attempt to mimic the conditions under which the FSL system will be applied in practice. In a realistic FSL scenario, the user has a set of relations of interest ("target relations"), and can come up with a handful of examples for each. The relations in the set are often related to each other. The user may potentially have access to a labeled dataset of a different set of relations ("background relations"), which they may want to use to train, or to improve, their FSL system.

The resulting classifier will then be applied to unlabeled data aiming to detect new target relations, in which, realistically:

(a) some relations are rarer than others.
(b) most instances do not correspond to a target relation.
(c) many instances may not correspond to a background relation either.
(d) relation instances may include named entities, as well as pronouns and common noun entities.

Ideally, the episodes in an FSL evaluation should be chosen in a way that reflects (a)–(d) above.1 The first characteristic (a) naturally follows the non-uniform distribution of relation types in a (non-artificial) text collection. The second point (b) stems from the fact that natural text refers to a broad, inherently unbounded, range of relation types, while in an RC setting, particularly for FSL, there is typically a restricted set of target relations. Similarly, while available RC training sets (for the supervised setting) may annotate more relation types than in a typical few-shot setting, they still contain a limited number of relation types in comparison to the full range of relations expressed in the corpus.

1 An additional concern of a realistic setup, which we do not consider in this work, is the accuracy of the entity extractor that marks entity boundaries and assigns entity types, prior to the RC setup.

This is prototypically evidenced in the naturally distributed RC dataset TACRED (§3.3), where 78.56% of the labels are NOTA (Table 1). Finally, naturally occurring textual relations may relate named entities as well as common nouns or pronouns (d); therefore, we expect the annotated RC dataset entities to include all such entity types.

Dataset      Train   Dev    Test
TACRED       13,012  5,436  3,325 (78.56%)
FS TACRED    8,163   633    804 (94.81%)

Table 1: Number of relation instances in the original TACRED dataset and in our derived Few-Shot TACRED. The corresponding test set NOTA rates appear in parentheses.

As we show below, existing FSL-RC datasets do not conform to these properties, resulting in artificial, and substantially easier, classification tasks. This in turn leads to inflated accuracy numbers that are not reflective of the real potential performance of a system. We propose a refined sampling procedure that adheres to the realistic setting and results in a substantially more realistic evaluation set, while conforming to the same N-Way K-Shot protocol. As we show in the experiments section (§6), this setup proves to be substantially more challenging for existing algorithms. We propose to use this procedure for future evaluation of FSL-RC algorithms, and release the corresponding code and data.

3.1 Existing FSL-RC Datasets

An N-Way K-Shot RC dataset was introduced by Han et al. (2018), called FewRel 1.0. The dataset became popular, yet proved to be rather easy: the current best leaderboard entry by Baldini Soares et al. (2019) obtains over 93.86% accuracy for 5-way 1-shot, above the 92% accuracy of human performance. The dataset was then updated to FewRel 2.0 (Gao et al., 2019), using an updated episode sampling procedure (see below), with the current best system obtaining a 5-way 1-shot score of 80.31 (Gao et al., 2019).

Underlying Labeled Data Both FewRel versions are based on the same underlying labeled dataset containing 100 distinct relations, with 700 instances per relation, totalling 70,000 labeled instances. The sentences are based on Wikipedia, and the entities and relation labels are assigned automatically using Wikidata, followed by a human verification step.

Note that while extensive, each relation type contains the same number of instances, regardless of any real truthful distribution in a corpus, resulting in a highly synthetic dataset, contradicting the realistic assumption (a) above. In contrast, instances in supervised RC datasets such as TACRED and DocRED (Zhang et al., 2017; Yao et al., 2019) do respect the relation distribution in a real corpus.

Finally, FewRel target entities are mostly named entities, not including important entity types such as pronouns and common nouns, which are present in supervised RC datasets (including TACRED), thus contradicting assumption (d).

Train/Dev/Test Splits The 100 relations are split into three disjoint sets, Rtrain, Rdev, and Rtest, consisting of 64, 16, and 20 relations, respectively. The relations in Rtrain and their corresponding instances are used as the labeled corpus of background relations, while evaluation episodes consist of relations in either Rdev or Rtest. We refer to this set (either test or dev) as Reval. Each episode consists of a random subset Rtarget ⊂ Reval.

Sampling Procedures The episode sampling procedure of FewRel 1.0 works by sampling N relations from Reval, resulting in a target set Rtarget, sampling a corresponding size-K support set σcj for each cj ∈ Rtarget, and then sampling a query example for which r(q) ∈ Rtarget. That is, the query in each episode is guaranteed to be in Rtarget. This setup is artificial, negating realistic condition (b) above, which explains the high performance on FewRel 1.0.

NOTA Following the aforementioned observation, the FewRel 2.0 work introduced a NOTA scenario. Here, after sampling the target relation set Rtarget ⊂ Reval, the query class r is sampled from Rtarget with probability p and from Reval \ Rtarget with probability 1 − p. That is, in 1 − p of the episodes the query does not correspond to any support set, in which case the answer is NOTA.
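Schematically, this query sampling can be rendered as follows (our own Python illustration of the procedure just described; all names are hypothetical):

    import random

    def sample_fewrel2_query(eval_set, r_target, p):
        # eval_set: dict mapping each relation in R_eval to its instances.
        # With probability p the query expresses a target relation; otherwise it
        # is drawn from R_eval \ R_target and labeled NOTA. Note that every NOTA
        # query still expresses some valid relation from the small R_eval set.
        if random.random() < p:
            gold = random.choice(r_target)
            return random.choice(eval_set[gold]), gold
        others = [c for c in eval_set if c not in r_target]
        gold = random.choice(others)
        return random.choice(eval_set[gold]), "NOTA"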

Although a step in the right direction (indeed, results in this setup drop from over 90% to around 80%), this setup is still highly unrealistic: not only are all the NOTA instances guaranteed to be valid relations, they also always come from the same small set, contradicting assumption (c).

In a realistic setup, we would expect the vast majority of test instances to be NOTA, but the set of NOTA instances is expected to vary greatly: some of them will correspond to relations from the background relations, some will correspond to unseen relations, and many will not correspond to any concrete relation. Moreover, some of the NOTA cases will appear in sentences that do contain a target relation, but between different entities. Supervised relation extraction and relation classification datasets reflect this situation, and we argue that FSL evaluation sets should also do so.

3.2 Better FSL-RC Evaluation Sets

We propose a methodology for transforming a supervised RC dataset into a few-shot RC dataset, while attempting to maintain properties (a)–(d) of the realistic evaluation scenario. This methodology can be applied to existing and future supervised datasets, thus reducing the need to collect new dedicated FSL datasets.

3.2.1 Realistic Underlying Labeled Data
We assume a given supervised dataset with C categories, divided into train and test sections, where each section contains all C categories, with distinct instances in each section (the typical setting for supervised multi-class classification). Some instances (in all sections) may be labeled "None-of-the-above" (also known as "other" in the classic supervised setting, or "no relation" in TACRED terminology), hereafter NOTA, signifying that these instances do not belong to any of the C categories.

Transformation We transform the supervised dataset into an FSL dataset containing (as in FewRel) a set of background relations for training and a disjoint set of relations for evaluation. To perform this transformation, we begin by choosing M categories as Reval.2 The remaining C − M categories are designated as background relations Rtrain.3 We now keep the same instance-level train/dev/test splits of the original supervised dataset, but relabel the instances in each section: train set instances whose labels are in Rtrain retain their original labels, while all other training instances are labeled as NOTA, and similarly for the test and dev splits. This results in sets where each set has distinct labels, but some of the NOTA instances in one set correspond to labels in other sets.

2 In practice we have MT categories for test and MD for dev; we refer to both as eval for brevity.
3 To perform the N-Way K-Shot setup, M is required to be larger than N; in case the original data has a NOTA label, M may be equal to N.
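The relabeling step can be sketched as follows (a minimal Python illustration under an assumed instance format with a 'label' field; the released conversion code may differ):

    def relabel_section(instances, section_relations):
        # Labels belonging to this section's relation set are kept; everything
        # else (including an original "no relation" label) becomes NOTA.
        relabeled = []
        for x in instances:
            label = x["label"] if x["label"] in section_relations else "NOTA"
            relabeled.append({**x, "label": label})
        return relabeled

    # Train keeps only R_train labels; dev/test keep only their eval relations:
    # few_shot_train = relabel_section(train_instances, r_train)
    # few_shot_test = relabel_section(test_instances, r_test)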

Multiple Splits The choice of relations for each set influences the resulting dataset: Some relations are more similar to each other than others, and splits that put several similar relations in an eval set are harder than splits in which similar relations are divided between the train and eval sets. Moreover, as the number of labeled instances differs across relations, splitting by relation results in different numbers of train/dev/test instances. We thus repeat the process several times, each time with a distinct set of eval relations.

3.2.2 Realistic Episode Sampling
To create an episode, we first sample the N ∗ K instances that constitute the N support sets, as in previous episodic sampling: Sample N out of M relations, and then sample K instances for each relation from the underlying eval set. However, the query for the episode is then sampled uniformly from all remaining instances in the eval set. If the label of the instance chosen as query differs from the N target relations in the episode, it is labeled as NOTA. This query sampling procedure maintains both the label distribution and the NOTA rate of the underlying supervised dataset.
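A minimal sketch of this sampler (ours, in Python; it assumes the relabeled eval section of §3.2.1 as a list of (instance, label) pairs) highlights the difference from the FewRel procedures, namely that the query is drawn uniformly from the remaining data, so its label distribution, and hence the NOTA rate, follows the underlying corpus:

    import random

    def sample_realistic_episode(eval_instances, n_way, k_shot):
        # eval_instances: list of (instance, label) pairs; label may be "NOTA".
        by_label = {}
        for x, y in eval_instances:
            if y != "NOTA":
                by_label.setdefault(y, []).append(x)
        r_target = random.sample(list(by_label), n_way)
        support = {c: random.sample(by_label[c], k_shot) for c in r_target}
        used = {id(x) for xs in support.values() for x in xs}
        remaining = [(x, y) for x, y in eval_instances if id(x) not in used]
        query, gold = random.choice(remaining)           # uniform over what is left
        label = gold if gold in r_target else "NOTA"     # out-of-target becomes NOTA
        return r_target, support, query, label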

3.3 Few-Shot TACRED: Realistic Few-Shot Relation Classification

We apply our transformation methodology to the TACRED RC dataset (Zhang et al., 2017). The TACRED dataset was collected from a news corpus, with the purpose of extracting relations involving 100 target entities. Accordingly, each sentence containing a mention of one of these target entities was used to generate candidate relation instances for the RC task. The relation label was annotated as one of 41 pre-defined relation categories, when appropriate, or as an additional "no relation" category. The "no relation" category corresponds to cases where some other relation type holds between the two arguments, as well as cases in which no relation holds between them; we consider both types of cases as falling under our NOTA category.

We choose M = 10 of the 41 relations for the test set, divide the remaining 31 relations into 25 and 6 for training and development, respectively, and release this split for future research. Table 1 lists the respective numbers of train/dev/test instances in our Few-Shot TACRED, along with the resulting NOTA rate in the test instances, as well as the corresponding numbers for the original TACRED dataset. As expected, in a typical few-shot setting over natural text (as in Few-Shot TACRED, unlike FewRel), where the number of targeted classes (N-way) is small, most instances correspond to the NOTA case. This is indeed illustrated in Table 1: the original TACRED dataset includes 41 target classes, vs. 10 in Few-Shot TACRED, and hence has a lower NOTA rate (conversely, in a 5-way setting, the NOTA rate is even higher, see Figure 2).

Evaluation Sets For evaluation, we consider sets of 150,000 episodes, sampled according to the procedure above. For robustness, we create 5 evaluation sets of 30,000 episodes each, and report the mean and STD scores over the 5 sets. Figure 1 (shown in §1) presents the distribution differences between Few-Shot TACRED and FewRel 2.0 episodes. As we show in Section 6, the Few-Shot TACRED evaluation set proves to be a substantial challenge for few-shot algorithms.

Figure 1: Relation distribution across episodes in our newly derived Few-Shot TACRED and the existing FewRel 2.0 RC task. On the left, the relation distribution in Few-Shot TACRED episodes, which follows a real-world distribution; on the right, the relation distribution in FewRel 2.0, which is synthetic. The y-axis of both plots is in log scale. The NOTA proportion in Few-Shot TACRED is 97.5%, while in FewRel 2.0 it is 50%.

4 Background: Prior Few-Shot RC Models

As mentioned earlier, only a handful of examples are provided for the target classes in the few-shot setting. It is therefore quite challenging to utilize these examples effectively for learning or updating model parameters. Consequently, quite a few existing few-shot models, in the machine learning literature as well as in NLP (Vinyals et al., 2016; Ravi and Larochelle, 2017; Baldini Soares et al., 2019), perform a representation learning phase (typically known as embedding learning or metric learning), followed by nearest-neighbor classification. Here, model parameters are first learned over the background classes, for which substantial training data is available. Then, classification of test instances is based on the trained model, with the hope that this model will generalize reasonably well to the target classes.4

In the nearest-neighbor approach, classification is done via a scoring function score(q, ci), which assigns a score to a query instance q and a target class ci. Because the class is represented by its support set, σci, the scoring function can be a similarity function between the query and the class's support set:

score(q, ci) ≜ sim(q, σci)

Most often, an embedding-based approach is taken to compute similarity, decomposing the process into two separate components (Snell et al., 2017; Baldini Soares et al., 2019; Le et al., 2019). First, instances are embedded into an explicit, typically dense, vector space by an embedding function. Then, query-support similarity is measured over the embedded vectors. Specifically, the prototypical network of Snell et al. (2017) represents a target class ci by a class prototype vector μi, which is the average embedding of the K instances in the support set of the class. The similarity between the query and each support set, sim(q, σci), is then measured as the similarity between the query and the corresponding prototype vector, assuming some similarity function between vectors in the embedding space:

sim(q, σci) ≜ sim(q, μi)

4 While in the remainder of this paper we focus on this similarity-based approach, it is worth noting that there exist other approaches for FSL, which further utilize the few labeled support examples. These include data augmentation methods, which generate additional examples based on the few initial ones, as well as optimization-based methods (Ravi and Larochelle, 2017; Finn et al., 2017), where the model does utilize the small support sets of the target classes for parameter learning. Integrating our contributions with these approaches is left for future work.


This approach was adopted in the state-of-the-art method (Baldini Soares et al., 2019) for few-shot RC (FewRel 1.0, excluding the NOTA category), as well as by several other works on FSL in NLP (Bao et al., 2020a; Yu et al., 2018).

Nearest-neighbor Classification Rule Similarity is computed between a test instance and each support set, selecting the nearest class:

fS(q) = arg max_cj sim(q, σcj)
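Combined with class prototypes, this rule reduces to an argmax over similarity scores, as in the following sketch (ours, in Python/NumPy, assuming instances have already been embedded):

    import numpy as np

    def nearest_class(query_vec, support_vecs):
        # support_vecs: dict mapping each class to a (K, d) array of embedded
        # support instances; the prototype is the mean support embedding.
        prototypes = {c: v.mean(axis=0) for c, v in support_vecs.items()}
        return max(prototypes, key=lambda c: float(np.dot(query_vec, prototypes[c])))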

Instance Representation Baldini Soares et al. (2019) further conducted an empirical analysis of embedding functions for few-shot RC. Their most effective embedding method augments the sentence with special tokens at the beginning and at the end of each of the two entities of the relation instance. The instance representation is then obtained by concatenating the two corresponding start tokens from BERT's last layer (Devlin et al., 2019). In our experiments, we adopt this embedding function, denoted BERTEM (BERT-based Entity Marking), as well as the use of dot product as the vector similarity function (after reassessing its effectiveness as well).
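The following sketch renders this representation with the HuggingFace transformers library (our own illustration; the marker strings and the BERT checkpoint name are assumptions, as the exact tokens are not specified here):

    import torch
    from transformers import BertModel, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    tokenizer.add_special_tokens(
        {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]})
    model = BertModel.from_pretrained("bert-base-uncased")
    model.resize_token_embeddings(len(tokenizer))  # room for the new marker tokens

    def bert_em(marked_sentence):
        # e.g. "[E1] Satya Nadella [/E1] has taken over at [E2] Microsoft [/E2] ."
        enc = tokenizer(marked_sentence, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]        # (seq_len, 768), last layer
        ids = enc["input_ids"][0].tolist()
        e1 = ids.index(tokenizer.convert_tokens_to_ids("[E1]"))
        e2 = ids.index(tokenizer.convert_tokens_to_ids("[E2]"))
        # concatenate the two entity start-marker states
        return torch.cat([hidden[e1], hidden[e2]])        # (1536,)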

4.1 FewRel 2.0 BERT Sentence-pair Model

The FewRel 2.0 work presented a model for the NOTA setting which skips the embedding learning phase (Gao et al., 2019). Instead, it utilizes the embeddings-based next sentence prediction score of BERT (Devlin et al., 2019) as the similarity score between a query and each support set instance. Then, similarly to the approach described above, a nearest-neighbor criterion is applied over the average similarity score between the query and all support instances of each class. A parallel scoring mechanism is implemented to decide whether the NOTA category should be chosen.

4.2 Related FSL Classification Models

In this section we first review some prominent FSL work addressing other machine learning tasks. In addition, we compare the notions of Out-Of-Domain (OOD) detection and NOTA detection.

In a recent work on FSL, Tseng et al. (2020) aim to improve generalization abilities by providing supervision for the category transfer phase. In their learning setting, the classes of each training episode are divided into two subsets: the first acts as the "typical" training set while the second simulates the test set. To improve generalization, they add an additional encoding layer that is optimized to maximize performance on the simulated test categories.

Another recent FSL work, addressing text classification, suggests weighing words by their frequency over the training set (Bao et al., 2020b). The model uses two components to classify the given text into one of the episode's categories. The first component computes the inverse frequency of each support set token over the training set. The second component estimates the inductive level of support set tokens with respect to classification. Finally, the output of these two components is used to train a linear classifier, by which the query is classified.

Out-Of-Domain Detection The essence of the NOTA category resembles OOD detection, as in both cases the goal is to detect instances not falling under the known categories. Tan et al. (2019) define the OOD classes as the set of all classes that were not part of the training classes (vs. NOTA, which means that none of the given support classes in an episode is present). In their work, the authors suggest a representation learning approach for OOD detection in text classification. Their method combines a hinge loss with the classic cross-entropy loss function. The former is used to push away the representations of the OOD instances, while the latter is used to learn correct classification within the in-domain classes.

5 Classification Rules: Analysis and Extension

In this section, we provide an analytic perspective on the bias that different nearest-neighbor classification rules impose on the learned embedding space. We start with an analysis of the classification rule for the basic few-shot RC setting, without the NOTA category, as applied in prior work (Section 4). This analysis follows directly the constraint presented in the influential work of Weinberger and Saul (2009), and utilized in subsequent work (e.g., Shen et al., 2010; Dhillon et al., 2010). We then extend this analysis to the setting that does include the NOTA category.

First, we analyze the straightforward threshold-based approach for this setting. Then, inspired by this analysis, we propose an alternative approach, with a corresponding constraint, which represents the NOTA category by one or more explicit learned vectors. As shown in subsequent sections, this new approach performs consistently better than other methods on both the FewRel 2.0 and our new Few-Shot TACRED benchmarks, and is thus suggested as an appealing approach for few-shot relation classification.

5.1 Constraints Imposed by Nearest-neighbor Classification

Classification without NOTA As described earlier, the nearest-neighbor approach assigns a query instance to the class of its nearest support set. We start our analysis by adapting inequality (10) from Weinberger and Saul (2009), which was introduced to formulate the training goal for metric learning in k-nearest-neighbor classification. To this end, we adapt the original inequality to our nearest-neighbor few-shot classification setting (Section 4). The obtained inequality below specifies the necessary and sufficient constraints that the embedding space, along with the similarity function over it, should satisfy in order to reach perfect classification over all possible episodes in a given dataset.5 For every possible query instance q, a support set σr(q) from the same class as q, and a support set σ¬r(q) for a different class, the following constraint should hold:

∀ q, σr(q), σ¬r(q):  sim(q, σr(q)) > sim(q, σ¬r(q))    (1)

That is, to achieve perfect classification, each possible relation instance q imposes that support sets of different classes should be positioned further away from it (being less similar) than the most distant support set it might have from its own class. Generally speaking, the nearest-neighbor classification rule implies that instances that are rather close to their class mates may also be rather close to other classes, while instances that are far from their class mates should also be positioned at least as far from other classes.

5 Notice that we drop the margin element in the adapted inequality, as it is not needed for the analytic purpose of our constraint.

In the few-shot setting, the embedding function is learned during training, over the training categories. As the learning process tries to optimize classification on the training set, it effectively attempts to learn an embedding function that satisfies the above constraint as much as possible. Indeed, we often observed almost perfect performance over the training data, indicating that, for the training instances, this constraint is mostly satisfied by the learned embedding function. Yet, while it is hoped that the embedding function would also properly separate instances of new, previously unseen, classes, in practice this holds to a lesser degree, as indicated by lower test performance.

Thresholded Classification with NOTA When the NOTA option is present, the nearest-neighbor classification rule can be naturally augmented by assigning the NOTA category to test queries whose similarity to all of the target classes does not surpass a predetermined (possibly learned) threshold, θ. Extending our analysis to this classification rule, to achieve perfect classification the embedding space must fulfil the following necessary and sufficient constraint, whose left-hand side is relevant only for episodes that include a support set for the query's class:

∀ q, σr(q), σ¬r(q):  sim(q, σr(q)) > θ > sim(q, σ¬r(q))    (2)

Since the same threshold is applied to all queries, to achieve perfect classification in this setting θ should be smaller than all within-class similarities, for any possible pair of a query q and a support set of its class σr(q). Concurrently, it should be larger than all cross-class similarities, for any possible query q and a support set of a different class σ¬r(q).6

We observe that Inequality (2) imposes a global constraint over the embedding space. It implies that the degree to which all classes should be separated from each other is imposed, globally, by those queries in the entire space that are the furthest away from their own class support sets. Accordingly, it requires all classes to be positioned equally far from each other, regardless of their own "compactness". This makes a much harsher constraint, and challenge for the embedding learning, than Inequality (1), which allows certain classes to be nearer if their within-class similarities are high.

6 Proof provided in the Appendix.

5.2 NOTA As a Vector (NAV)

Motivated by the last observation, we propose an alternative classification approach for few-shot classification with the NOTA category. In this approach, we represent the NOTA category by an explicit vector in the embedding space, denoted VN, which is learned during training. At test time, the similarity between the query q and this vector, sim(q, VN), is computed and regarded as the similarity between the query and the NOTA category:

sim(q, NOTA) ≜ sim(q, VN)

Then, q is assigned to its nearest class, by the usual nearest-neighbor classification rule. Thus, the NOTA class is selected if sim(q, VN) is higher than q's similarity to all target classes. Effectively, this mechanism considers an individual NOTA classification threshold for each query, namely sim(q, VN), which depends on q's position in the embedding space relative to VN. We term this approach "NOTA As a Vector" (NAV).
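The resulting decision rule differs from the thresholded rule of §5.1 only in where the NOTA score comes from, as the following sketch shows (ours, in Python/NumPy; v_nota denotes the learned vector VN and theta the learned scalar threshold):

    import numpy as np

    def predict_threshold(q, prototypes, theta):
        # prototypes: dict mapping each target class to its prototype vector.
        scores = {c: float(np.dot(q, proto)) for c, proto in prototypes.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > theta else "NOTA"   # one global threshold

    def predict_nav(q, prototypes, v_nota):
        scores = {c: float(np.dot(q, proto)) for c, proto in prototypes.items()}
        scores["NOTA"] = float(np.dot(q, v_nota))         # query-dependent threshold
        return max(scores, key=scores.get)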

Classification under the NAV scheme implies the following constraint on the embedding space, considering perfect classification:7

∀ q, σr(q), σ¬r(q):  sim(q, σr(q)) > sim(q, VN) > sim(q, σ¬r(q))    (3)

This constraint implies that, to achieve perfect classification, the similarity between a query and the NOTA vector VN should be smaller than q's similarity to all possible support sets of its own class, while being larger than its similarity to all support sets of other classes. In comparison to the prior classification rules, this approach does allow instances that are rather close to their class mates to be closer to other classes than instances that are positioned further from some of their class mates, similarly to the lighter constraint in Inequality (1). Yet, to enable such "geometry" of the embedding space, it is also required that instances be positioned appropriately relative to the NOTA vector, in a way that satisfies the two constraints in Inequality (3). Using the NAV approach, it is hoped that the learning process would position the NOTA vector, and adjust the embedding parameters, such that these constraints would be mostly satisfied. Overall, the NAV approach imposes different constraints on the similarity space than using a single global classification threshold for the NOTA category (as in Inequality (2)), and it is not clear a priori which approach would be more effective to learn. This question is investigated empirically in Section 6.

7 Notice the analogous structure of Inequalities 2 and 3, where sim(q, VN) replaces the role of θ. A similar correctness proof applies.

5.3 Multiple NOTA Vectors

A natural extension of the NAV approach, denoted MNAV, is to represent the NOTA category by multiple vectors. During classification, the model picks the vector closest to the query as VN, which accordingly defines sim(q, NOTA). Then, classification is determined as in the NAV method, where adding multiple NOTA vectors is expected to effectively ease the embedding space constraints. In practice, we treat the number of NOTA vectors as an empirically tuned hyperparameter.
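Under MNAV, the NOTA score simply becomes the best score over the set of NOTA vectors (a sketch, continuing the notation of the previous one):

    import numpy as np

    def predict_mnav(q, prototypes, nota_vectors):
        # nota_vectors: (m, d) array holding the m learned NOTA vectors.
        scores = {c: float(np.dot(q, proto)) for c, proto in prototypes.items()}
        scores["NOTA"] = float(max(np.dot(v, q) for v in nota_vectors))
        return max(scores, key=scores.get)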

5.4 Training Procedure

For training, we use the same episode sampling procedure that generated the dev/test sets, but where the target relations are sampled from a set of train relations, disjoint from the dev/test relations. We define an epoch to include a fixed number of episodes, considered a tuned hyper-parameter, independently sampling episodes for each epoch. We measure dev set performance after each epoch, and use early stopping. For each episode E = (Rtarget, {σc1, . . . , σcN}, q), we encode the query using the BERTEM encoding function (Baldini Soares et al., 2019), described in §4, as q = BERTEM(q), and similarly for each item x in each support set, obtaining for each σcj the corresponding average prototype vector μj = (1/K) Σ_{x∈σcj} BERTEM(x).

We define the prototype of the NOTA class to be the learned NAV vector: μ⊥ = vN. Our loss term for each episode considers q and the prototype vectors μi, and tries to optimize Inequality (3): dot(q, μr(q)) > dot(q, μ⊥) > dot(q, μ¬r(q)). Concretely, we use the cross-entropy loss, as used in previous work (Baldini Soares et al., 2019):

− log ( e^dot(q, μr(q)) / Σ_{i ∈ Rtarget ∪ {⊥}} e^dot(q, μi) )

Note that this works towards satisfying the conditions in Inequality (3): in episodes where r(q) ≠ ⊥, the loss attempts to increase the first term in Inequality (3)

(the similarity between the query and the prototype vector of its class), while decreasing the similarity in the two other terms (the similarity between q and all other prototype vectors, including the NAV one). In particular, it drives towards satisfying sim(q, σr(q)) > sim(q, VN). In episodes where r(q) = ⊥, the loss increases the second term while decreasing the similarity in the third term, driving towards satisfying sim(q, VN) > sim(q, σ¬r(q)). Analogously, the same dynamics apply when a learned (scalar) threshold value determines the NOTA score.
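A minimal PyTorch rendering of this objective (ours; encode stands in for BERTEM, the support sets are assumed pre-encoded, and v_nota is the learned NOTA prototype μ⊥, registered as a trainable parameter):

    import torch
    import torch.nn.functional as F

    def episode_loss(encode, v_nota, support, query, gold_index):
        # support: list of N tensors, each (K, d), already encoded.
        # gold_index: position of the query's class in 0..N-1, or N for NOTA.
        prototypes = torch.stack([s.mean(dim=0) for s in support] + [v_nota])  # (N+1, d)
        q = encode(query)                                 # (d,)
        logits = prototypes @ q                           # dot-product scores, (N+1,)
        return F.cross_entropy(logits.unsqueeze(0), torch.tensor([gold_index]))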

Following Weinberger and Saul (2009), who derived a triplet loss objective, and similarly to subsequent lines of work (e.g., Schroff et al., 2015; Hoffer and Ailon, 2015; Ming et al., 2017), we also experimented with adapted versions of the triplet loss. Under this objective, instances not belonging to the same class are pushed away while same-class instances are pulled together, aiming to reach the desired ordering as in Inequalities (2) and (3). We tried multiple variants of this objective for FSL training, including versions with a margin element, but these experiments resulted in consistently lower results than the methods described above.

NOTA Vector Initialization For the NAV method, we straightforwardly initialized the single NOTA vector randomly. Random initialization of the multiple NOTA vectors in MNAV evolved into a single vector being dominantly picked as the NAV vector by the MNAV decision process. Consequently, results were very similar to the (single vector) NAV model. Presumably, this happened because a single random vector turned out to be closest to the sub-space initially populated by the pre-trained BERTEM embedding function. To avoid this, we wish to scatter all the initial vectors within the initially populated subspace. To this end, we initialize a NOTA vector by sampling a relation and then averaging 10 random instances from that relation. We repeat this process for each NOTA vector.
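This initialization can be sketched as follows (our illustration; embed stands for the pre-trained BERTEM function, and the sample size of 10 follows the text):

    import random
    import torch

    def init_nota_vectors(train_set, embed, num_vectors, samples_per_vector=10):
        # train_set: dict mapping each training relation to its instances.
        vectors = []
        for _ in range(num_vectors):
            relation = random.choice(list(train_set))
            sampled = random.sample(train_set[relation], samples_per_vector)
            vectors.append(torch.stack([embed(x) for x in sampled]).mean(dim=0))
        return torch.nn.Parameter(torch.stack(vectors))   # (num_vectors, d), trainable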

6 Experiments and Results

In this section, we assess our two main contributions. With respect to our Few-Shot TACRED dataset, we show that models that perform well on FewRel 2.0 perform poorly in this much more realistic setting, leaving a huge gap for improvement by future research. With respect to our proposed NAV modeling approach, we show that it is a viable, and advantageous, alternative to the threshold approach.

Implemented Models We conduct our investigation in the framework of the common embedding-based approach to FSL, with respect to the MNAV, NAV, and threshold-based methods described in §5. These methods are implemented following the best-performing embedding and similarity methods identified for the state-of-the-art method on FewRel 1.0 (Baldini Soares et al., 2019), namely, BERTEM applied using BERTBASE, and dot product similarity (§4). Additionally, we train and evaluate the baseline Sentence-Pair model, described in §4.1.

To select the number of NOTA vectors in the MNAV model, we experimented with 5 different values, ranging from 1 to 20. In practice, the choice of the number of vectors had rather little impact on the results (less than one F1 point). We use the best performing value for this hyperparameter, which was 20.

In terms of memory utilization, as 5-way 5-shot episodes require feeding the 25 instances of the support set, in addition to the query instances, into BERT simultaneously, they often occupy nearly the entire 32GB of GPU memory. To leverage the memory taken by the support set instances, we include as many queries as we can fit into the GPU's memory. In our experiments, we construct 3 episodes for each sampled support set (by sampling 3 different queries for it), which fully utilizes the GPU capacity. Since these episodes occupy the entire GPU memory, we use a single episode per batch.

We further note that it may be possible to perform the N-way classification by transforming it into a pair-wise classification, repeated N times (both in training and evaluation). This technique would reduce the memory usage but would increase the run-time. As we managed to fit the entire episode into our GPU memory, we followed the standard N-way approach, for faster computation, as was previously done by Gao et al. (2019).

Test Methodology and Metrics Like prior work, evaluation is conducted over randomly sampled episodes from the test data, as described in §2. Prior results for FewRel 2.0 (and FewRel 1.0) were reported in terms of accuracy.


However, in realistic, highly imbalanced relation classification datasets, like our Few-Shot TACRED, accuracy becomes meaningless. Hence, we propose micro F1 over the target relations as a more appropriate measure for future research. Accordingly, we report micro F1 for both datasets, as well as accuracy for the FewRel experiments, for compatibility. For both measures we report average values and standard deviations over 5 different random samples of episodes (Zhang et al., 2018, 2017). In all experiments, we train and evaluate five models and report the results of the median-performing model. Unless otherwise mentioned, reported result differences are significant under a one-tailed t-test at 0.05 confidence.
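Concretely, micro F1 here treats NOTA as the negative class; the following sketch reflects our reading of the metric (counting a true positive only when a target-relation prediction matches the gold label):

    def micro_f1(preds, golds, nota="NOTA"):
        tp = sum(p == g != nota for p, g in zip(preds, golds))   # correct non-NOTA
        pred_pos = sum(p != nota for p in preds)                 # predicted positives
        gold_pos = sum(g != nota for g in golds)                 # gold positives
        precision = tp / pred_pos if pred_pos else 0.0
        recall = tp / gold_pos if gold_pos else 0.0
        denom = precision + recall
        return 2 * precision * recall / denom if denom else 0.0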

6.1 FewRel 2.0 Results

We first confirm the appropriateness of our investigation by comparing performance on the prior FewRel 2.0 test data. Table 2 presents the figures for the two official (synthetic) test NOTA rates of this benchmark. We use a 50% NOTA rate to train all our models, with 6,000 episodes per epoch. As shown, the MNAV model performs best across all FewRel settings, obtaining a new SOTA for this task.8

Model           5-way 1-shot       5-way 5-shot
NOTA rate       15%      50%       15%      50%
Sentence-Pair   77.67    80.31     84.19    86.06
Threshold       63.41    76.48     65.43    78.95
NAV             77.17    81.47     82.97    87.08
MNAV            79.06    81.69     85.52    87.74

Table 2: Accuracy results on the FewRel 2.0 test set, for the four available settings of this benchmark. Results are reported for the FewRel 2.0 Sentence-Pair baseline model and our investigated models.

We next turn to a more comprehensive comparison of the investigated embedding-based few-shot models, namely, threshold-based, NAV, and MNAV, over the publicly available FewRel development set, with a 50% NOTA rate. The results in Table 3 show that, here as well, the MNAV model outperforms the others in both settings. The gap between MNAV and the threshold model is significant in the two settings, while the gap relative to the NAV model is significant only in the 5-shot setting.

8 Our MNAV results are also reported on the official FewRel 2.0 leaderboard, as Anonymous Cat, at https://thunlp.github.io/2/fewrel2_nota.html. We note that the FewRel test set is kept hidden; models are submitted to the FewRel authors, who produce (only) accuracy scores.

Model           Metric     5-way 1-shot     5-way 5-shot
Sentence-Pair   Accuracy   75.48 ± 0.33%    78.43 ± 0.25%
                F1         71.85 ± 0.44%    75.43 ± 0.31%
Threshold       Accuracy   76.32 ± 0.12%    80.30 ± 0.09%
                F1         73.34 ± 0.25%    78.89 ± 0.11%
NAV             Accuracy   78.54 ± 0.08%    80.44 ± 0.11%
                F1         75.00 ± 0.22%    79.20 ± 0.14%
MNAV            Accuracy   78.23 ± 0.13%    81.25 ± 0.18%
                F1         75.22 ± 0.19%    80.06 ± 0.11%

Table 3: FewRel 2.0 development set results, accuracy and micro F1.

Model           5-way 1-shot     5-way 5-shot
Sentence-Pair   10.19 ± 0.81%    —
Threshold       6.87 ± 0.48%     13.57 ± 0.46%
NAV             8.38 ± 0.80%     18.38 ± 2.01%
MNAV            12.39 ± 1.01%    30.04 ± 1.92%

Table 4: Micro F1 results on Few-Shot TACRED. Due to computational memory limitations, we could not evaluate the Sentence-Pair model in the 5-shot setting; see the Appendix for an explanation.


6.2 Few-Shot TACRED Results

We compare the MNAV, NAV, Sentence-Pair, and threshold-based models over our more realistic Few-Shot TACRED test set (here, the epoch size is 2,000). As seen in Table 4, the MNAV model outperforms the others, as was the case over FewRel 2.0.

Notably, performance is drastically lower over Few-Shot TACRED. We suggest that this indicates the much more challenging nature of a realistic setting, relative to the FewRel 2.0 setting, while indicating the limitations of all current models. We further analyze this performance gap in the next section.

7 Analysis

7.1 Differentiating Characteristics of FewRel vs. Few-Shot TACRED

As seen in Tables 3 vs. 4, the results on Few-Shot TACRED are drastically lower than those obtained for FewRel 2.0, by at least 50 points. Yet, the performance figures are difficult to compare due to several differences between the datasets, including training size, NOTA rate, and different entity types. To analyze the possible impact of these differences, we control for each of them and observe performance differences. For brevity, we focus on the MNAV model (1-shot and 5-shot).

Training Size We train the model on FewRel 2.0, taking the same number of training instances as in Few-Shot TACRED. Compared to full training, results dropped by five micro F1 points in the 1-shot setting and by 1.5 points for 5-shot, suggesting that the training size explains only a small portion of the performance gap between the two datasets.

NOTA Rates We control for the unrealistic NOTA rate in FewRel 2.0 by training and evaluating our model on higher NOTA rates. The results in Figure 2 indicate that realistic higher NOTA rates are indeed much more challenging: Moving from the original FewRel 50% NOTA rate to the 97.5% rate, as in Few-Shot TACRED, shrank the performance gap by 33 points in the 1-shot setting and by 35 for 5-shot.

Figure 2: MNAV results on the FewRel 2.0 dev set at different NOTA rates. The red points represent performance at a 97.5% NOTA rate, which is the Few-Shot TACRED NOTA rate. The blue and green horizontal lines denote the Few-Shot TACRED performance in the 1- and 5-shot settings, respectively.

Entity Types In this experiment, we evaluate performance differences when including all entity types (named entities, common nouns, and pronouns), as in Few-Shot TACRED, versus including only named entities, as in FewRel. To this end, we sampled two corresponding subsets of relation instances from Few-Shot TACRED, of the same size, with either all entity types or named entities only.9 Further, we control for the distributions of relation types in the two subsets, making them equal, since, as discussed in Section 3, this distribution impacts performance in RC datasets. Apparently, the impact of entity composition was different in the 1-shot and 5-shot settings. For 1-shot, the named entities subset yielded slightly lower performance (6.65 vs. 9.03 micro F1), which is hard to interpret. For 5-shot, performance on the named entities subset was substantially higher than when including all entity types (33.48 vs. 18.74), possibly suggesting that a larger diversity of entity types is more challenging for the model. In any case, we argue that RC datasets should include all entity types, to reflect a real-world corpus.

9 Entity types were automatically identified by the spaCy NER model (Honnibal and Montani, 2017), as well as by certain fixed types included in FewRel, such as ranks and titles.

Summary Overall, the differences we analyzed account for much of the large performance gap between the two datasets, particularly in the more promising 5-shot setting. As argued earlier, we suggest that Few-Shot TACRED represents more realistic properties of few-shot RC, including a realistic non-uniform distribution, "no relation" instances, and inclusion of all entity types, and hence should be utilized in future evaluations.

7.2 Few-Shot versus Supervised TACRED

We next analyze the impact of category transfer in Few-Shot TACRED. To this end, we apply our same MNAV model in a supervised (non-transfer) setting, termed Supervised MNAV, and compare it to the few-shot MNAV (FSL MNAV). Concretely, we trained the Supervised MNAV model on the training instances of the same categories as those in the Few-Shot TACRED test data (vs. training on different background relations in the transfer-based FSL setting). The supervised model was then tested for 5-way 5-shot classification on Few-Shot TACRED, identically to the FSL MNAV 5-way 5-shot testing in Table 4. The results showed a 31-point gap, with Supervised MNAV yielding 61.19 micro F1 while FSL MNAV scored 30.04, indicating the substantial challenge when moving from the supervised to the category transfer setting.

7.3 Qualitative Error Analysis

To obtain some insight into current performance, we manually analyzed 50 episodes for which the model predicted an incorrect support class (precision error) and 50 in which it missed identifying the right support class (recall error). We sampled



For the precision errors, we found a single prominent characteristic. Across all sampled episodes, both the query and the falsely selected support instance shared the same (ordered) pair of entity types. For example, they may both share the entity types of person and location, albeit having different relations, such as city of death vs. state of residence, or having no meaningful relation for the query (the no relation case). This behavior suggests that pre-training, together with fine-tuning on the background relations, allowed the BERT-based model to learn to distinguish entity types, to realize their criticality for the RC task, and to successfully match entity types between a query and a support instance. On the other hand, the low overall performance suggests that the model does not recognize well the patterns indicating a target relation based on a small support set. Additional evidence for this conjecture is obtained when examining confused class pairs in the predictions' confusion matrices (1-shot and 5-shot settings). Out of 10 confused class pairs, 8 have matching entity types; in the other two pairs, the location type is confused with organization in the context of school attended, which often carries a sense of location.

For the recall errors, manual inspection of the 50 episodes did not reveal any prominent insights. Therefore, we sampled 100,000 1-shot episodes over which we analyzed various statistics that may relate to recall errors. Of these, we present two analyses that seem to explain aspects of recall misses in a statistically significant manner (one-tailed t-test at the 0.01 significance level), though only to a partial extent.

The first analysis examines the impact of whether the relative order of the two marked argument entities flips between the query and support-instance sentences. To that end, we examined the approximately 2,600 episodes in our sample in which the query belongs to one of the support classes. We found that for episodes in which the argument order is consistent across the query and the support instance, the model identified the correct class in 15.68% of the cases, while when the order is flipped only 10.95% of the episodes are classified correctly. This suggests that a flipped order makes it more challenging for the model to match the relation patterns across the query and support sentences.
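A minimal sketch of this consistency check, assuming instances carry character offsets for the two arguments (our own representation, not the paper's data format):

    # Illustrative sketch of the argument-order consistency check.
    from dataclasses import dataclass

    @dataclass
    class Instance:
        text: str
        head_start: int  # char offset of the head (first) argument
        tail_start: int  # char offset of the tail (second) argument

    def argument_order_flipped(query, support):
        """True if the head precedes the tail in one sentence but not
        in the other."""
        q_head_first = query.head_start < query.tail_start
        s_head_first = support.head_start < support.tail_start
        return q_head_first != s_head_first

    q = Instance("Paris is the birthplace of Marie.", 27, 0)
    s = Instance("Marie was born in Paris.", 0, 18)
    print(argument_order_flipped(q, s))  # True: the order flips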
tenencias. The second analysis examines the impact

of lexical overlap between the query and support
instancia. Con ese fin, we compared 300 episodes
in which the correct support class was success-
fully identified (true positive) y 300 en el cual
it was missed (false negative). In each episode,
we measured Intersection over Union (IoU) (aka
Jaccard Index) for the two sets of lemmas in
the query and support instance. As expected, el
IoU value was significantly higher for the true
positive set (0.17) than for the false negative set
(0.12), suggesting that higher lexical match eases
recognizing the correct support instance.
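A minimal sketch of this lemma-overlap measurement, assuming spaCy's en_core_web_sm model for lemmatization (our illustration; the paper does not specify its lemmatizer):

    # Illustrative sketch of the Jaccard index (IoU) over lemma sets.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

    def lemma_set(sentence):
        return {tok.lemma_.lower() for tok in nlp(sentence) if not tok.is_punct}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    query = "Marie was born in Paris in 1867."
    support = "The scientist, born in Warsaw, moved to Paris."
    print(jaccard(lemma_set(query), lemma_set(support)))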

8 Conclusions

In this work, we offer several required criteria for realistic FSL datasets, while proposing a methodology to derive such benchmarks from available datasets designed for supervised learning. We then applied our methodology to the TACRED relation classification dataset, creating a challenging benchmark for future research. Indeed, previous models that achieved impressive results on FewRel, a synthetic dataset for FSL, failed miserably on our naturally distributed dataset. These results call for better models and loss functions for FSL, and indicate that we are far from having satisfying results in this setup. Our methodology may be further applied to additional datasets, enriching the availability of realistic datasets for FSL.

Next, we analyzed the constraints imposed on embedding functions by nearest-neighbor classification schemes, common in FSL. This analysis led us to derive a new method for representing the NOTA category as one or more explicit learned vectors, yielding a novel classification scheme, which achieves new state-of-the-art performance. We suggest that our analysis may further inspire additional innovations in few-shot learning.

References

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1279


Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2020a. Few-shot text classification with distributional signatures. In International Conference on Learning Representations.

Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2020b. Few-shot text classification with distributional signatures. In International Conference on Learning Representations.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Paramveer S. Dhillon, Partha Pratim Talukdar, and Koby Crammer. 2010. Learning better data representation using inference-driven metric learning. In Proceedings of the ACL 2010 Conference Short Papers, pages 377–381, Uppsala, Sweden. Association for Computational Linguistics.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135, International Convention Centre, Sydney, Australia. PMLR.

Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2019. FewRel 2.0: Towards more challenging few-shot relation classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6250–6255, Hong Kong, China. Association for Computational Linguistics.

Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803–4809, Brussels, Belgium. Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1514

Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Workshop Track Proceedings.

Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing.

Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333–342, Vancouver, Canada. Association for Computational Linguistics. https://doi.org/10.18653/v1/K17-1034

Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, and Jiebo Luo. 2019. Revisiting local descriptor based image-to-class measure for few-shot learning. In CVPR, pages 7260–7268. Computer Vision Foundation / IEEE.

Zuheng Ming, Joseph Chazalon, Muhammad Muzzamil Luqman, Muriel Visani, and Jean-Christophe Burie. 2017. Simple triplet loss based on intra/inter-class metric learning for face verification. In 2017 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2017, Venice, Italy, October 22–29, 2017, pages 1656–1664. IEEE Computer Society. https://doi.org/10.1109/ICCVW.2017.194

Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics. https://doi.org/10.3115/1690219.1690287

Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105–1116, Berlin, Germany. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1105

Abiola Obamuyide and Andreas Vlachos. 2018. Zero-shot relation classification as textual entailment. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 72–78, Brussels, Belgium. Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5511

Abiola Obamuyide and Andreas Vlachos. 2019. Model-agnostic meta-learning for relation classification with limited supervision. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5873–5879, Florence, Italy. Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1589

Sachin Ravi and Hugo Larochelle. 2017. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24–26, 2017, Conference Track Proceedings. OpenReview.net.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. FaceNet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7–12, 2015, pages 815–823. IEEE Computer Society. https://doi.org/10.1109/CVPR.2015.7298682

Chunhua Shen, Junae Kim, and Lei Wang. 2010. Scalable large-margin Mahalanobis distance metric learning. IEEE Transactions on Neural Networks, 21(9):1524–1530. https://doi.org/10.1109/TNN.2010.2052630, PubMed: 20709641

Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, volume 30, pages 4077–4087. Curran Associates, Inc.

Ming Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Saloni Potdar, Shiyu Chang, and Mo Yu. 2019. Out-of-domain detection for low-resource text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3566–3572, Hong Kong, China. Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1364

Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, and Ming-Hsuan Yang. 2020. Cross-domain few-shot classification via learned feature-wise transformation. In International Conference on Learning Representations.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, volume 29, pages 3630–3638. Curran Associates, Inc.

Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention CNNs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298–1307, Berlin, Germany. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1123

Kilian Q. Weinberger and Lawrence K. Saul. 2009. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10(2):207–244.

Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 764–777, Florence, Italy. Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1074

Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text classification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1206–1215, New Orleans, Louisiana. Association for Computational Linguistics. https://doi.org/10.18653/v1/N18-1109

Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215, Brussels, Belgium. Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1244

Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45, Copenhagen, Denmark. Association for Computational Linguistics. https://doi.org/10.18653/v1/D17-1004

A Appendix

A.1 Proof of Inequality 2

We prove that the two sides of the inequality
are necessary and sufficient to guarantee perfect
classification by the threshold-based classification
regla, over all possible episodes in a given dataset.
We first prove necessity. As the LHS refers to σ_{r(q)}, it is relevant only for episodes where q belongs to one of the support classes. If it is violated for some episode, then that episode cannot be classified to r(q) (the correct class) by the threshold-based classification rule. As for the necessity of the RHS, consider an episode in which sim(q, σ_{¬r(q)}) > θ, violating the RHS. Without loss of generality, we can construct a possible episode with the same q and σ_{¬r(q)}, whose correct classification is NOTA (making sure to exclude r(q) from the support classes). This episode cannot be correctly classified by the classification rule to NOTA, since q's similarity to at least one class, ¬r(q), surpasses θ.
guarantees that sim(q, σ¬r(q)) < θ for all support classes, implying a correct NOTA classification. Otherwise, the correct classification is r(q), being one of the support classes. In this case, the LHS guarantees excluding a NOTA classification, while the RHS excludes classification to any other category different than r(q). QED. A.2 Sentence-Pair High GPU Demand The Sentence-Pair model (Gao et al., 2019) requires at least twice more GPU memory than a standard embedding learning model, such as the threshold model (described in Sec 5). The higher memory demand arises from feeding BERT with the concatenation of each support instance to each query instance. This concatenation effectively doubles the average input sentence length. Due to the Transformer architecture, doubling the input sentence length requires higher GPU RAM memory. In particular, a fully connected layer requires four times more memory when fed with a double-length sequence. Hence, representation learning models, which embed a single instance into an embedded vector space, are more memory efficient than the sentence-pair model. for As mentioned in Section 6.2, we could not the Few- train the sentence-pair model Shot TACRED 5-shot setting, due to memory limitations, even though we used NVIDIA TESLA V100-32GB GPU. This stems from the fact that the average sentence length in Few-Shot TACRED is higher than in FewRel, which did not fit into our server memory with the higher memory consumption of the sentence-pair model. 706 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u / t a c l / l a r t i c e - p d f / d o i / . 1 0 1 1 6 2 / t l a c _ a _ 0 0 3 9 2 1 9 5 5 1 5 5 / / t l a c _ a _ 0 0 3 9 2 p d . f b y g u e s t t o n 0 8 S e p e m b e r 2 0 2 3Revisando la clasificación de relaciones de pocas posibilidades: imagen

Descargar PDF
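As a back-of-the-envelope illustration of this quadratic effect (our arithmetic, not a figure from the paper): the self-attention score tensor for a sequence of length L holds one entry per token pair, so per layer and head its size grows as

    \text{memory}_{\text{attn}} \propto L^2, \qquad (2L)^2 = 4L^2,

i.e., concatenating a support and a query instance of similar lengths roughly quadruples the attention memory, on top of the linear growth of the remaining activations.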