AMR Similarity Metrics from Principles


Juri Opitz and Letitia Parcalabescu and Anette Frank

Department for Computational Linguistics
Heidelberg University
69120 Heidelberg

{opitz,parcalabescu,frank}@cl.uni-heidelberg.de

Abstract

Different metrics have been proposed to com-
pare Abstract Meaning Representation (AMR)
graphs. The canonical SMATCH metric (Cai
and Knight, 2013) aligns the variables of two
graphs and assesses triple matches. The recent
SEMBLEU metric (Song and Gildea, 2019) is
based on the machine-translation metric BLEU
(Papineni et al., 2002) and increases compu-
tational efficiency by ablating the variable-
alignment. In this paper, i) we establish criteria
that enable researchers to perform a principled
assessment of metrics comparing meaning rep-
resentations like AMR; ii) we undertake a
thorough analysis of SMATCH and SEMBLEU
where we show that the latter exhibits some
undesirable properties. For instance, it does
not conform to the identity of indiscernibles
rule and introduces biases that are hard to
control; and iii) we propose a novel metric
S2MATCH that is more benevolent to only
very slight meaning deviations and targets the
fulfilment of all established criteria. We assess
its suitability and show its advantages over
SMATCH and SEMBLEU.

1 Introduction

Proposed in 2013, the aim of Abstract Meaning
Representation (AMR) is to represent a sentence’s
meaning in a machine-readable graph format
(Banarescu et al., 2013). AMR graphs are rooted,
acyclic, directed, and edge-labeled. Entities, events,
properties, and states are represented as variables
that are linked to corresponding concepts (encoded
as leaf nodes) via is-instance relations (cf. Figure 1,
left). This structure allows us to capture complex
linguistic phenomena such as coreference, seman-
tic roles, or polarity.

When measuring the similarity between two
AMR graphs A and B, for instance for the pur-
pose of AMR parse quality evaluation, the metric
of choice is usually SMATCH (Cai and Knight,
2013). Its backbone is an alignment-search be-


tween the graphs’ variables. Recently, the SEMBLEU
metric (Song and Gildea, 2019) has been proposed
that operates on the basis of a variable-free AMR
(Figure 1, right),1 converting it to a bag of k-grams.
Circumventing a variable alignment search redu-
ces computational cost and ensures full determi-
nacy. Also, grounding the metric in BLEU (Papineni
等人。, 2002) has a certain appeal, since BLEU is
quite popular in machine translation.

然而, we find that we are lacking a
principled in-depth comparison of the properties of
different AMR metrics that would help informing
researchers to answer questions such as: 哪个
metric should I use to assess the similarity of
two AMR graphs, 例如, in AMR parser evaluation?
What are the trade-offs when choosing one metric
over the other? Besides providing criteria for such
a principled comparison, we discuss a property
that none of the existing AMR metrics currently
satisfies: They do not measure graded meaning
differences. Such differences may emerge because
of near-synonyms such as ruin – annihilate; skinny
– thin – slim; enemy – foe (Inkpen and Hirst,
2006; Edmonds and Hirst, 2002) or paraphrases
such as be able to – can; unclear – not clear. In
a classical syntactic parsing task, metrics do not
need to address this issue because input tokens are
typically projected to lexical concepts by lemma-
tization, hence two graphs for the same sentence
tend not to disagree on the concepts projected
from the input. This is different in semantic
parsing where the projected concepts are often
more abstract.

This article is structured as follows: We first
establish seven principles that one may expect a
metric for comparing meaning representations to

1Most research papers on AMR display the graphs in this
‘‘shallow’’ form. This increases simplicity and readability.
(Lyu and Titov, 2018; Konstas et al., 2017; Zhang et al.,
2019; Damonte and Cohen, 2019; Song et al., 2016).

Transactions of the Association for Computational Linguistics, vol. 8, pp. 522–538, 2020. https://doi.org/10.1162/tacl_a_00329
Action Editor: Adam Lopez. Submission batch: 11/2019; Revision batch: 3/2020; Published 9/2020.

© 2020 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.


the following constraint on metric: D × D →
[0, 1].2

II. identity of indiscernibles This focal
principle is formalized by metric(A, B) = 1 ⇔
A = B. It is violated if a metric assigns a value
indicating equivalence to inputs that are not
equivalent or if it considers equivalent inputs as
different.

III. symmetry In many cases, we want a metric
to be symmetric: metric(A, B) = metric(B, A).
A metric violates this principle if it assigns a
pair of objects different scores when argument
order is inverted. Together with principles I and
II, it extends the scope of the metric to usages
beyond parser evaluation, as it also enables sound
IAA calculation, clustering, and classification
of AMR graphs when we use the metric as a
kernel (e.g., SVM). In parser evaluation, one may
dispense with any (strong) requirement of
symmetry—however, the metric must then be
applied in a standardized way, with a fixed order
of arguments.

In cases where there is no defined reference,
the asymmetry could be handled by aggregating
metric(A, B) and metric(B, A), for example,
using the mean. However, it is open what aggre-
gation is best suited and how to interpret re-
sults, for example, for metric(A, B) = 0.1 and
metric(B, A) = 0.9.

IV. determinacy Repeated calculation over the
same inputs should yield the same score. 这
principle is clearly desirable as it ensures reprodu-
cibility (a very small deviation may be tolerable).
The next three principles we believe to be
desirable specifically when comparing meaning
representation graphs such as AMR (Banarescu
等人。, 2013). The first two of the following prin-
ciples are motivated by computer science and
语言学, whereas the last one is motivated by a
linguistic and an engineering perspective.

V. no bias Meaning representations consist of
nodes and edges encoding specific information
类型. Unless explicitly justified, a metric should
not unjustifiably or in unintended ways favor
correctness or penalize errors for specific substruc-
tures (e.g., leaf nodes). In case a metric favors or
penalizes certain substructures more than others,
in the interest of transparency, this should be made
clear and explicit, and should be easily verifiable

2At some places in this paper, due to conventions, we
project this score onto [0,100] and speak of points.

数字 1: A cat drinks water. Simplified AMR graph
and underlying deep form with is-instance relations
(—i) from variables (solid) to concepts (dashed).

satisfy, in order to obtain meaningful and appro-
priate scores for the given purpose (§2). Based on
these principles we provide an in-depth analysis
of the properties of the AMR metrics SMATCH
and SEMBLEU (§3). We then develop S2MATCH,
an extension of SMATCH that abstracts away from
a purely symbolic level, allowing for a graded
semantic comparison of atomic graph-elements
(§4). By this move, we enable SMATCH to take into
account fine-grained meaning differences. We
show that our proposed metric retains valuable
benefits of SMATCH, but at the same time is more
benevolent to slight meaning deviations. Our
code is available online at https://github.
com/Heidelberg-NLP/amr-metric-suite.

2 From Principles to AMR Metrics

The problem of comparing AMR graphs A, B ∈ D
with respect to the meaning they express
occurs in several scenarios, e.g., parser
evaluation or inter-annotator agreement calcula-
tion (IAA). To measure the extent to which A
and B agree with each other, we need a metric:
D × D → R that returns a score reflecting meaning
distance or meaning similarity (for convenience,
we use similarity). Below we establish seven
principles that seem desirable for this metric.

2.1 Seven Metric Principles

The first four metric principles are mathemati-
cally motivated:

I. continuity, non-negativity and upper-bound
A similarity function should be continuous, 和
two natural edge cases: A, B are equivalent (maxi-
mum similarity) or unrelated (minimum simi-
larity). By choosing 1 as upper bound, we obtain



and consistent. For example, if we wish to give
negation of the main predicate of a sentence a
two-times higher weight compared with negation
in an embedded sentence, we want this to be made
transparent. A concrete example for a transparent
bias is found in Cai and Lam (2019). They analyze
the impact of their novel top–down AMR parsing
strategy by integrating a root-distance bias into
SMATCH to focus on structures situated at the top
of a graph.

We now turn to properties that focus on the
nature of the objects we aim to compare: graph-
based compositional meaning representations.
These graphs consist of atomic conditions that
determine the circumstances under which a
sentence is true. Hence, our metric score should
increase with increasing overlap of A and B,
which we denote f (A, 乙), the number of matching
conditions. This overlap can be viewed from
a symbolic or/and a graded perspective (cf.,
例如, Schenker et al. [2005], who denote these
perspectives as ‘‘syntactic’’ vs. ‘‘semantic’’).
From the symbolic perspective, we compare the
nodes and edges of two graphs on a symbolic
等级, while from the graded perspective, 我们
take into account the degree to which nodes and
edges differ. Both types of matching involve a
precondition: If A and B contain variables, 我们
need a variable-mapping in order to match con-
ditions from A and B.3

VI. matching (graph-based) meaning repre-
sentations – symbolic match A natural symbolic
overlap-objective can be found in the Jaccard
index J (Jaccard, 1912; Real and Vargas, 1996;
Papadimitriou et al., 2010): Let t(G) be the set
of triples of graph G, f(A, B) = |t(A) ∩ t(B)|
the size of the overlap of A, B, and z(A, B) =
|t(A) ∪ t(B)| the size of their union. Then, we
wish that A and B are considered more simi-
lar to each other than A and C iff A and B
exhibit a greater relative agreement in their (sym-
bolic) conditions: metric(A, B) > metric(A, C)
⇔ f(A,B)/z(A,B) = J(A, B) > f(A,C)/z(A,C) =
J(A, C). An allowed exception to this monotonic
relationship

3For example, consider graph A in Figure 1 and its
set of triples t(A): {⟨x1, instance, drink-1⟩, ⟨x2, instance,
cat⟩, ⟨x3, instance, water⟩, ⟨x1, arg0, x2⟩, ⟨x1, arg1, x3⟩}.
When comparing A against graph B we need to judge whether
a triple t ∈ t(A) is also contained in B: t ∈ t(B). For
this, we need a mapping map: vars(A) → vars(B), where
vars(A) = {x1, .., xn}, vars(B) = {y1, .., ym},
such that f is maximized.

can occur if we want to take into account a
graded semantic match of atomic graph elements
or sub-structures, which we will now elaborate on.
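The symbolic overlap objective of principle VI can be sketched in a few lines. A minimal sketch, assuming variables are already aligned; the helper name and toy triples are illustrative, not part of the paper's released code:

```python
# Jaccard index over triple sets: J(A, B) = f(A, B) / z(A, B),
# i.e., matching triples over the size of the union.

def jaccard(triples_a, triples_b):
    a, b = set(triples_a), set(triples_b)
    f = len(a & b)             # overlap f(A, B)
    z = len(a | b)             # union size z(A, B)
    return f / z if z else 1.0

# Toy graphs that agree on everything except one concept:
A = {("x1", "instance", "drink-1"), ("x1", "arg0", "x2"),
     ("x2", "instance", "cat")}
B = {("x1", "instance", "drink-1"), ("x1", "arg0", "x2"),
     ("x2", "instance", "kitten")}

j = jaccard(A, B)  # 2 shared triples, 4 in the union -> 0.5
```

As noted later in the paper, the F1 reported by SMATCH relates monotonically to this objective via F1 = 2J/(1 + J).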
VII. matching (graph-based) meaning repre-
sentations – graded semantic match One
motivation for this principle can be found in
engineering, e.g., when assessing the
quality of produced parts. Here, small deviations
from a reference may be tolerable within certain
limits. Similarly, two AMR graphs may match
almost perfectly—except for two small divergent
components. The extent of divergence can be
measured by the degree of similarity of the
two divergent components. In our case, we need
linguistic knowledge to judge what degree of
divergence we are dealing with and whether it is
tolerable.

For example, consider that graph A contains a
triple ⟨x, instance, conceptA⟩ and graph B a
triple ⟨y, instance, conceptB⟩, while otherwise the graphs
are equivalent, and the alignment has set x =
y. Then f(A, B) should be higher when conceptA
is similar to conceptB compared to the case where
conceptA is dissimilar to conceptB. In AMR,
concepts are often abstract, so near-synonyms may
even be fully admissible (enemy–foe). Although
such (near-)synonyms are bound to occur fre-
quently when we compare AMR graphs of dif-
ferent sentences that may contain paraphrases, we
will see, in Section 4, that this can also occur
in parser evaluation, where two different graphs
represent the same sentence. By defining metric
to map to a range [0,1] we already defined it to
be globally graded. Here, we desire that graded
similarity may also hold of minimal units of AMR
graphs, such as atomic concepts or even sub-
graphs, e.g., to reflect that injustice(x)
is very similar to justice(x) ∧ polarity(x, −).

2.2 AMR Metrics: SMATCH and SEMBLEU

With our seven principles for AMR similarity
metrics in place, we now introduce SMATCH and
SEMBLEU, two metrics that differ in their design
and assumptions. We describe each of them in
detail and summarize their differences, setting the
stage for our in-depth metric analysis (§3).

Align and match – SMATCH The SMATCH metric
operates in two steps. First, (i) we align the varia-
bles in A and B in the best possible way, by finding
a mapping map⋆: vars(A) → vars(B) that
yields a maximal set of matching triples between
yields a maximal set of matching triples between



A and B. For example, if ⟨xi, rel, xj⟩ ∈ t(A)
and ⟨map⋆(xi), rel, map⋆(xj)⟩ = ⟨yk, rel, ym⟩ ∈
t(B), we obtain one triple match. (ii) We compute
Precision, Recall, and F1 score based on the set of
triples returned by the alignment search. The NP-
hard alignment search problem of step (i) is solved
with a greedy hill-climber: Let fmap(A, B) be
the count of matching triples under any mapping
function map. Then,

map⋆ = argmax_map fmap(A, B)    (1)

Multiple restarts with different seeds increase

the likelihood of finding better optima.
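The search of Eq. 1 can be sketched as a greedy hill-climber with random restarts. The toy triple format and helper names below are illustrative assumptions, not the reference implementation; in particular, the real SMATCH additionally enforces a one-to-one variable mapping, which is omitted here for brevity:

```python
import itertools
import random

def f_map(triples_a, triples_b, mapping):
    """fmap(A, B): triples of A contained in B after renaming A's variables."""
    b = set(triples_b)
    renamed = {(mapping.get(s, s), rel, mapping.get(t, t))
               for s, rel, t in triples_a}
    return len(renamed & b)

def hill_climb(triples_a, triples_b, vars_a, vars_b, restarts=5, seed=0):
    rng = random.Random(seed)
    best_map, best_score = {}, -1
    for _ in range(restarts):
        # random initial mapping vars(A) -> vars(B)
        mapping = {v: rng.choice(vars_b) for v in vars_a}
        score = f_map(triples_a, triples_b, mapping)
        improved = True
        while improved:  # greedily re-map one variable at a time
            improved = False
            for v, w in itertools.product(vars_a, vars_b):
                cand = dict(mapping)
                cand[v] = w
                s = f_map(triples_a, triples_b, cand)
                if s > score:
                    mapping, score, improved = cand, s, True
        if score > best_score:
            best_map, best_score = mapping, score
    return best_map, best_score

# Two structurally identical toy graphs ("a cat drinks"):
A_triples = [("a1", "instance", "drink-01"), ("a1", "arg0", "a2"),
             ("a2", "instance", "cat")]
B_triples = [("b1", "instance", "drink-01"), ("b1", "arg0", "b2"),
             ("b2", "instance", "cat")]
best_map, best_score = hill_climb(A_triples, B_triples,
                                  ["a1", "a2"], ["b1", "b2"])
# best_map == {"a1": "b1", "a2": "b2"}; best_score == 3
```

On this tiny pair every restart converges to the global optimum; on larger graphs the climber can get stuck in local optima, which is why the restarts matter.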

Simplify and match – SEMBLEU The SEMBLEU
metric in Song and Gildea (2019) can also be
described as a two-step procedure. But unlike
SMATCH it operates on a variable-free reduction
of an AMR graph G, which we denote by Gvf
(vf : variable-free, 数字 1, right-hand side).

In a first step, (i) SEMBLEU performs k-gram
extraction from Avf and Bvf in a breadth-first
traversal (path extraction). It then (ii) adopts the
BLEU score from MT (Papineni et al., 2002) 到
calculate an overlap score based on the extracted
bags of k-grams:

SEMBLEU = BP · exp( Σ_{k=1}^{n} wk · log pk )    (2)

BP = e^{ min{ 1 − |Bvf|/|Avf|, 0 } }    (3)

where pk is BLEU’s modified k-gram precision that
measures k-gram overlap of a candidate against a
reference: pk = |kgram(Avf) ∩ kgram(Bvf)| / |kgram(Avf)|.
wk is the (typically uniform) weight over chosen k-gram
sizes. SEMBLEU uses NIST geometric probability
smoothing (Chen and Cherry, 2014). The recall-
focused ‘‘brevity penalty’’ BP returns a value
smaller than 1 when the candidate length |Avf| is
smaller than the reference length |Bvf|.

The graph traversal performed in SEMBLEU
starts at the root node. During this traversal it
simplifies the graph by replacing variables with
their corresponding concepts (see Figure 1: the
node c becomes DRINK-01) and collects visited
nodes and edges in uni-, bi- and tri-grams (k = 3
is recommended). Here, a source node together
with a relation and its target node counts as a
bi-gram. For the graph in Figure 1, the extracted
unigrams are {cat, water, drink-01};
extracted bi-grams are {drink-01 arg1 cat,
drink-01 arg2 water}.
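The path extraction can be sketched on the Figure 1 graph. A toy sketch with assumed adjacency format and helper names (relation labels follow footnote 3); plain recursion stands in for the breadth-first queue of the actual metric:

```python
def extract_paths(graph, node, k):
    """All paths with at most k nodes starting at `node`, as tuples
    that interleave node labels and relation labels."""
    paths = [(node,)]
    if k > 1:
        for rel, child in graph.get(node, []):
            for sub in extract_paths(graph, child, k - 1):
                paths.append((node, rel) + sub)
    return paths

def kgram_bag(graph, nodes, k=3):
    """Bag of uni-/bi-/tri-gram paths, started at every node."""
    bag = []
    for n in nodes:
        bag.extend(extract_paths(graph, n, k))
    return bag

# "A cat drinks water" after replacing variables with concepts:
graph = {"drink-01": [("arg0", "cat"), ("arg1", "water")]}
bag = kgram_bag(graph, ["drink-01", "cat", "water"])
# unigrams: (drink-01,), (cat,), (water,)
# bigrams:  (drink-01, arg0, cat), (drink-01, arg1, water)
```

Comparing two such bags with BLEU's modified precision then gives the SEMBLEU score.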

SMATCH vs. SEMBLEU in a nutshell SEMBLEU
differs significantly from SMATCH. A key dif-
ference is that SEMBLEU operates on reduced
variable-free AMR graphs (Gvf )—instead of full-
fledged AMR graphs. By eliminating variables,
SEMBLEU bypasses an alignment search. 这
makes the calculation faster and alleviates a
weakness of SMATCH: The hill-climbing search
is slightly imprecise. 然而, SEMBLEU is not
guided by aligned variables as anchors. Instead,
SEMBLEU uses an n-gram statistic (BLEU) to
compute an overlap score for graphs, based on
k-hop paths extracted from Gvf , using the root
node as the start for the extraction process.
SMATCH, in contrast, acts directly on variable-
bound graphs, matching triples based on a selected
alignment. If in some application we wanted it,
both metrics allow the capturing of more ‘‘global’’
graph properties: SEMBLEU can increase its k-
parameter and SMATCH may match conjunctions
of (interconnected) triples. In the following
analysis, however, we will adhere to their default
configurations because this is how they are used
in most applications.

3 Assessing AMR Metrics with Principles

This section evaluates SMATCH and SEMBLEU
against the seven principles we established above
by asking: Why does a metric satisfy or violate a
given principle? and What does this imply? We
start with principles from mathematics.

我. Continuity, non-negativity, and upper-bound
This principle is fulfilled by both metrics as they
are functions of the form metric : D × D → [0, 1].

II. Identity of indiscernibles This principle is
fundamental: An AMR metric must return maxi-
mum score if and only if the graphs are equivalent
in meaning. However, there are cases where SEMBLEU, in
contrast to SMATCH, does not satisfy this principle.
Figure 2 shows an example.

这里, SEMBLEU yields a perfect score for two
AMRs that differ in a single but crucial aspect:
Two of its ARGx roles are filled with arguments that
are meant to refer to distinct individuals that share
the same concept. The graph on the left is an ab-
straction of, e.g., The man1 sees the other
man2 in the other man2, while the graph on the
right is an abstraction of The man1 sees himself 1



数字 2: Two AMRs with semantic roles filled
differently; SEMBLEU considers them as equivalent.

in the other man2. SEMBLEU does not recognize
the difference in meaning between a reflexive
and a non-reflexive relation, assigning maximum
similarity score, whereas SMATCH reflects such
differences appropriately because it accounts for
variables.

数字 3: Symmetry violation for two parses of Things
are so heated between us, I don’t know what to do.

In total, SEMBLEU does not satisfy principle II
because it operates on a variable-free reduction
of AMRs (Gvf). One could address this prob-
lem by reverting to canonical AMR graphs and
adopting variable alignment in SEMBLEU. But this
would adversely affect the advertised efficiency
advantages over SMATCH. Re-integrating the align-
ment step would make SEMBLEU less efficient than
SMATCH because it would add the complexity of
breadth-first traversal, yielding a total complexity
of O(SMATCH) plus O(|V| + |E|).
III. Symmetry This principle is fulfilled if
∀ A, B ∈ D : metric(A, B) = metric(B, A).
数字 3 shows an example where SEMBLEU does
not comply with this principle, to a significant
extent: When comparing AMR graph A against
B, it yields a score greater than 0.8, however, when
comparing B to A the score is smaller than 0.5.
We perform an experiment that quantifies this
effect on a larger scale by assessing the frequency
and the extent of such divergences. To this
end, we parse 1,368 development sentences from
an AMR corpus (LDC2017T10) with an AMR
parser (obtaining graph bank A) and evaluate
it against another graph bank B (gold graphs or
another parser-output). We quantify the symmetry
violation by the symmetry violation ratio (Eq. 4)
and the mean symmetry violation (Eq. 5) given
some metric m:

svr = (1/|A|) · Σ_{i=1}^{|A|} I[ m(A_i, B_i) ≠ m(B_i, A_i) ]    (4)

msv = (1/|A|) · Σ_{i=1}^{|A|} | m(A_i, B_i) − m(B_i, A_i) |    (5)
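Both diagnostics are easy to compute for any scoring function. A minimal sketch; `asym_precision` is a deliberately asymmetric toy metric on sets, and all names and data are illustrative assumptions:

```python
def svr(metric, pairs, delta=1e-4):
    """Eq. 4: share of pairs whose score changes when arguments swap."""
    violations = sum(abs(metric(a, b) - metric(b, a)) > delta
                     for a, b in pairs)
    return violations / len(pairs)

def msv(metric, pairs):
    """Eq. 5: mean absolute score difference under argument swap."""
    return sum(abs(metric(a, b) - metric(b, a))
               for a, b in pairs) / len(pairs)

def asym_precision(a, b):
    return len(a & b) / len(a)   # precision-like, hence asymmetric

pairs = [({1, 2, 3}, {1, 2}), ({1}, {1})]
# first pair: 2/3 vs. 1.0 (a violation); second pair: 1.0 vs. 1.0
```

With these toy inputs, svr is 0.5 (one of two pairs violates symmetry) and msv is 1/6.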


Graph banks        svr (%, ∆>0.0001)     msv (in points)
                   SMATCH   SEMBLEU      SMATCH   SEMBLEU
GPLA – gold          2.7      81.8         0.1      3.2
CAMR – gold          7.8      92.8         0.2      3.1
JAMR – gold          5.0      87.0         0.1      3.2
GPLA – JAMR          4.2      86.0         0.1      3.0
GPLA – CAMR          7.4      93.4         0.1      3.4
JAMR – CAMR          7.9      91.6         0.2      3.3
avg.                 5.8      88.8         0.1      3.2

Table 1: svr (Eq. 4), msv (Eq. 5) of AMR metrics.

BLEU symmetry violation, MT

data: newstest2018   svr (%, ∆ > 0.0001)   msv (in points)
worst-case                 81.3                 0.2
avg-case                   72.7                 0.2

Table 2: svr (Eq. 4), msv (Eq. 5) of BLEU, MT
setting.

We conduct the experiment with three AMR
systems, CAMR (Wang et al., 2016), GPLA (Lyu
and Titov, 2018), and JAMR (Flanigan et al.,
2014), and the gold graphs. Moreover, to provide
a baseline that allows us to better put the results
into perspective, we also estimate the symmetry
violation of BLEU (SEMBLEU’s MT ancestor) in
an MT setting. Specifically, we fetch 16 system
outputs of the WMT 2018 EN-DE metrics task
(Ma et al., 2018) and calculate BLEU(A,B) and
BLEU(B,A) of each sentence-pair (A,B) from the
MT system’s output and the reference (using the
same smoothing method as SEMBLEU). As worst-
case/avg.-case, we use the outputs from the team
where BLEU exhibits maximum/median msv.4

Table 1 shows that more than 80% of the
evaluated AMR graph pairs lead to a symmetry
violation with SEMBLEU (as opposed to less than

4worst: LMU uns.; avg.: LMU NMT (Huck et al., 2017).


数字 4: Symmetry evaluations of metrics. SEMBLEU (left column) and SMATCH (middle column) and BLEU as
a ‘baseline’ in an MT task setting on newstest2018. SEMBLEU: large divergence, strong outliers. SMATCH: small
divergence, few outliers; BLEU: many small divergences, zero outliers. (A) marks the case in Figure 3.

10% for SMATCH). The msv of SMATCH is consi-
derably smaller compared to SEMBLEU: 0.1 vs. 3.2
points F1 score. Even though the BLEU metric is
inherently asymmetric, most of the symmetry
violations are negligible when applied in MT (high
svr, low msv, Table 2). However, when applied
to AMR graphs ‘‘via’’ SEMBLEU the asymmetry
is amplified by a factor of approximately 16 (0.2
vs. 3.2 points). Figure 4 visualizes the symme-
try violations of SEMBLEU (left), SMATCH (middle),
and BLEU (right). The SEMBLEU-plots show that
the effect is widespread, some cases are extreme,
many others are less extreme but still considerable.
This stands in contrast to SMATCH but also to BLEU,
which itself appears well calibrated and does not
suffer from any major asymmetry.

In total, symmetry violations with SMATCH are
much fewer and less pronounced than those
observed with SEMBLEU. In theory, SMATCH is
fully symmetric, however, violations can occur
due to alignment errors from the greedy variable-
alignment search (we discuss this issue in the next
paragraph). In contrast, the symmetry violation
of SEMBLEU is intrinsic to the method because
the underlying overlap measure BLEU is inherently
asymmetric, however, this asymmetry is amplified
in SEMBLEU compared to BLEU.5

IV. Determinacy This principle states that
repeated calculations of a metric should yield iden-

# restarts            1        2        3        5        7
corpus vs. corpus   2.6e−4   1.7e−4   8.1e−5   5.7e−5   5.6e−5
graph vs. graph     1.3e−3   1.0e−3   8.5e−4   5.3e−4   4.0e−4

Table 3: Expected determinacy error ǫ in SMATCH
F1.

tical results. Because there is no randomness in
SEMBLEU, it fully complies with this principle.
The reference implementation of SMATCH does not
fully guarantee deterministic variable alignment
results, because it aligns the variables by means of
greedy hill-climbing. However, multiple random
initializations together with the small set of AMR
variables imply that the deviation will be ≤ ǫ
(a small number close to 0).6 In Table 3 we
measure the expected ǫ: it displays the SMATCH F1
standard deviation with respect to 10 independent
runs, on a corpus level and on a graph-pair level
(arithmetic mean).7 We see that ǫ is small, even
when only one random start is performed (corpus
level: ǫ = 0.0003, graph level: ǫ = 0.0013).
We conclude that the hill-climbing in SMATCH is
unlikely to have any significant effects on the final
score.

V. No bias A similarity metric of (A)MRs
should not unjustifiably or unintentionally favor

5As we show below (principle V), this is due to the way in
which k-grams are extracted from variable-free AMR graphs.
6Moreover, ǫ = 0 is guaranteed when resorting to an
(expensive) ILP calculation (Cai and Knight, 2013).
7Data: dev set of LDC2017T10, parses by GPLA.


                 d leaves     root        branching node
SEMBLEU            3d         d² + d        d² + 2d
SMATCH             d          d             d

Table 4: Error impact depending on error location
in a tree with node degree d.

Figure 5: Left: In April, a woman rides a car from
Rome to Pisa. root nodes A: travel-01 vs. B: drive-01.
Right: In Apr., a sailor travels with a ship from P.
to N.

the correctness or penalize errors pertaining to
any (sub-)structures of the graphs. However, we
find that SEMBLEU is affected by a bias that affects
(some) leaf nodes attached to high-degree nodes.
The bias arises from two related factors: (i) when
transforming G to Gvf, SEMBLEU replaces variable
nodes with concept nodes. Thus, nodes that were
leaf nodes in G can be raised to highly connected
nodes in Gvf. (ii) breadth-first k-gram extraction
starts from the root node. During graph traversal,
concept leaves—now occupying the position of
(former) variable nodes with a high number of
outgoing (and incoming) edges—will be visited
and extracted more frequently than others.

The two factors in combination make SEMBLEU
penalize a wrong concept node harshly when it
is attached to a high-degree variable node (the
leaf is raised to high-degree when transforming G
to Gvf). Conversely, correct or wrongly assigned
concepts attached to nodes with low degree are
only weakly considered.8 For example, consider
Figure 5. SEMBLEU considers two graphs that
express quite distinct meanings (left and right) as
more similar than graphs that are almost equivalent
in meaning (left, variant A vs. B). This is because
the leaf that is attached to the root is raised to a
highly connected node in Gvf and thus is over-
frequently contained in the extracted k-grams,
whereas the other leaves will remain leaves in
Gvf.

Analyzing and quantifying SEMBLEU’s bias To
better understand the bias, we study three limiting

8This may have severe consequences, e.g., for negation,
since negation always occurs as a leaf in G and Gvf.
Therefore, SEMBLEU, by-design, is benevolent to polarity
errors.


Figure 6: # of k-grams entered by a node in SEMBLEU.

cases: (i) the root is wrong, (ii) d leaf nodes are
wrong, and (iii) one branching node is wrong.
Depending on a specific node and its position
in the graph, we would like to know onto how
many k-grams (SEMBLEU) or triples (SMATCH) the
errors are projected. For the sake of simplicity,
we assume that the graph always comes in its
simplified form Gvf, that it is a tree, and that
every non-leaf node has the same out-degree d.

The result of our analysis is given in Table 4⁹
and exemplified in Figure 6. Both show that the
number of times k-gram extraction visits a node
heavily depends on its position and that with
growing d, the bias gets amplified (Table 4).10
For example, when d = 3, 3 wrong leaves
yield 9 wrong k-grams, and 1 wrong branching
node can already yield 18 wrong k-grams. By
contrast, in SMATCH the weight of d leaves always
approximates the weight of 1 branching node of
degree d.

In total, in SMATCH the impact of a wrong node
is constant for all node types and rises linearly
with d. In SEMBLEU the impact of a node rises
approximately quadratically with d and it also
depends on the node type, because it raises some
(but not all) leaves in G to connected nodes in
Gvf.

9Proof sketch, SMATCH, d leaves: d triples, a root: d triples,
a branching node: d+1 triples. SEMBLEU (wk=1/3, k=3),
d leaves: 3d k-grams (d tri, d bi, d uni). A root: d² tri, d bi,
1 uni. A branching node: d²+d+1 tri, d+1 bi, 1 uni.
10Consider that in AMR, d can be quite high, e.g., a
predicate with multiple arguments and additional modifiers.
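The counts in footnote 9 can be checked with a few lines of arithmetic for the d = 3 case discussed above; this follows the footnote's derivation directly rather than implementing a traversal:

```python
d = 3

# d wrong leaves: each enters 1 uni-, 1 bi- and 1 tri-gram -> 3d in total
leaves_kgrams = 3 * d                          # 9

# a wrong root: d^2 tri-grams, d bi-grams, 1 uni-gram
root_kgrams = d**2 + d + 1                     # 13

# a wrong branching node: (d^2+d+1) tri-, (d+1) bi-, 1 uni-gram
branch_kgrams = (d**2 + d + 1) + (d + 1) + 1   # 18

# In SMATCH the same errors touch d, d, and d+1 triples, respectively:
smatch_leaves, smatch_root, smatch_branch = d, d, d + 1
```

This reproduces the numbers in the running text: 9 corrupted k-grams for 3 wrong leaves versus 18 for a single wrong branching node, while SMATCH weighs all three error locations roughly equally.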


Eliminating biases A possible approach to
reduce SEMBLEU’s biases could be to weigh the
extracted k-gram matches according to the degree
of the contained nodes. 然而, this would imply
that we assume some k-grams (and thus also some
nodes and edges) to be of greater importance than
others—in other words, we would eliminate one
bias by introducing another. Because the breadth-
first traversal is the metric’s backbone, this issue
may be hard to address well. When BLEU is used
for MT evaluation, there is no such bias because
the k-grams in a sentence appear linearly.

六、. Graph matching: Symbolic perspective
This principle requires that a metric’s score grows
with increasing overlap of the conditions that are
simultaneously contained in A and B. SMATCH
fulfills this principle since it matches two AMR
graphs inexactly (Yan et al., 2016; Riesen et al.,
2010) by aligning variables such that the triple
matches are maximized. 因此, SMATCH can be
seen as a graph-matching algorithm that works
on any pair of graphs that contain (一些) 节点
that are variables. It fulfills the Jaccard-based
overlap objective, which symmetrically measures
the amount of triples on which two graphs agree,
normalized by their respective sizes (since SMATCH
F1 = 2J/(1 + J) is a monotonic relation).

Because SEMBLEU does not satisfy principles
II and III (id. of indiscernibles and symmetry),
it is a corollary that it cannot fulfill the overlap
objective.11 Generally, SEMBLEU does not com-
pare and match two AMR graphs per se, instead
it matches the results of a graph-to-bag-of-paths
projection function (§2.2) and the input may not
be recoverable from the output (surjective-only).
Thus, matching the outputs of this function cannot
be equated to matching the inputs on a graph-level.

4 Towards a More Semantic Metric for
Semantic Graphs: S2MATCH

This section focuses on principle VII, semantically
graded graph matching, a principle that none of
the AMR metrics considered so far satisfies. A

11Proof by symmetry violation: w.l.o.g. ∃ A, B: metric(A, B) >
metric(B, A) → contradiction, since f(A, B) = |t(A) ∩ t(B)| =
|t(B) ∩ t(A)| = f(B, A). /// Proof by identity of indiscernibles:
w.l.o.g. ∃ A, B, C: metric(A, B) = metric(A, C) = 1, but
f(A, B)/z(A, B) = 1 > f(A, C)/z(A, C) → contradiction.


数字 7: Three different AMR graphs representing
The cat sprints; The kitten runs; The giraffe sleeps and
pairwise similarity scores from SEMBLEU, SMATCH, and
S2MATCH (see §4 for S2MATCH).

fulfilment of this principle also increases the capa-
city of a metric to assess the semantic similarity
of two AMR graphs from different sentences.
For instance, when clustering AMR graphs or
detecting paraphrases in AMR-parsed texts, the
ability to abstract away from concrete lexica-
lizations is clearly desirable. Consider Figure 7,
with three different graphs. Two of them (A, B)
are similar in meaning and differ significantly
from C. Yet, both SMATCH and SEMBLEU yield
the same result in the sense that metric(A, B) =
metric(A, C). Put differently, neither metric
takes into account that giraffe and kitten are two
quite different concepts, while cat and kitten are
more similar. However, we would like this to be
reflected by our metric and obtain metric(A,
B) > metric(A, C) in such a case.

S2MATCH We propose the S2MATCH metric (Soft
Semantic match, pronounced: [estu:mætS]) that
builds on SMATCH but differs from it in one
important aspect: Instead of maximizing the
number of (hard) triple matches between two
graphs during alignment search, we maximize
the (soft) triple matches by taking into account
the semantic similarity of concepts. Recall that
an AMR graph contains two types of triples:
instance and relation triples (e.g., Figure 7, left:
⟨a, instance, cat⟩, ⟨c, arg0, a⟩). In SMATCH,
two triples can only be matched if they are
identical. In S2MATCH, we relax this constraint,
which also has the potential to yield a different,
and possibly, a better variable alignment. More
precisely, in SMATCH we match two instance triples

D

w
n

A
d
e
d

F
r


H

t
t

p

:
/
/

d

r
e
C
t
.


t
.

e
d

/
t

A
C

/

A
r
t

C
e

p
d

F
/

d


/

.

1
0
1
1
6
2

/
t

A
C
_
A
_
0
0
3
2
9
1
9
2
3
3
1
2

/

/
t

A
C
_
A
_
0
0
3
2
9
p
d

.

F


y
G

e
s
t

t


n
0
9
S
e
p
e


e
r
2
0
2
3

                 avg. msv (Eq. 5)   determinacy error
                                    1 restart    2 restarts   4 restarts
SMATCH           0.0011             1.3e-3       1.0e-3       5.3e-4
S2MATCH          0.0005             9.0e-4       6.1e-4       2.1e-4
relative change  -54.6%             -30.7%       -39.0%       -60.3%

Table 5: S2MATCH improves upon SMATCH by reducing the extent of its non-determinacy.

In the following pilot experiments, we use cosine (Eq. 8) and τ = 0.5 over 100-dimensional GloVe vectors (Pennington et al., 2014).
To sum up, S2MATCH is designed to either yield the same score as SMATCH, or a slightly increased score when it aligns concepts that are symbolically distinct but semantically similar. An example from parser evaluation is shown in Figure 8. Here, S2MATCH increases the score to 63 F1 (+10 points) by detecting a more adequate alignment that accounts for the graded similarity of two related AMR concept pairs. We believe that this is justified: The two graphs are very similar and an F1 of 53 is too low, doing the parser injustice.

On a technical note, the changes in alignments also have the outcome that S2MATCH mends some of SMATCH's flaws: It better addresses principles III and IV, reducing the symmetry violation and determinacy error (Table 5).

Qualitative study: Probing S2MATCH's choices  This study randomly samples 100 graph pairs from those where S2MATCH assigned higher scores than SMATCH.12 Two annotators were asked to judge the similarity of all aligned concepts with similarity scores <1.0: Are the concepts dissimilar, similar, or extremely similar? For concepts judged dissimilar, we conclude that S2MATCH erroneously increased the score; if judged as (extremely) similar, we conclude that the decision was justified. We calculate three agreement statistics that all show large consensus among our annotators (Cohen's kappa: 0.79, squared kappa: 0.87, Pearson's ρ: 0.91). According to the annotations, the decision to increase the score is mostly justified: in 56% and 12% of cases both annotators voted that the newly aligned concepts are extremely similar and similar, respectively, while the agreed dissimilar label makes up 25% of cases.

12 Automatic graphs by GPLA, on LDC2017T10, dev set.

Figure 8: '6 Abu Sayyaf suspects were captured last week in a raid in Metro Manila.' gold (top) vs. parsed AMR (bottom). SMATCH aligns criminal-organization to city (red); S2MATCH aligns criminal-organization to suspect-01, city to country-region (blue).

⟨a, instance, x⟩ ∈ A and ⟨map(a), instance, y⟩ ∈ B as follows:

    hardMatch = I[x = y]    (6)

where I(c) equals 1 if c is true and 0 otherwise. S2MATCH relaxes this condition:

    softMatch = 1 − d(x, y),    (7)

where d is an arbitrary distance function d: X × X → [0, 1]. For example, in practice, if we represent the concepts as vectors x, y ∈ R^n, we can use

    d(x, y) = min{1, 1 − yᵀx / (‖y‖₂ ‖x‖₂)}.    (8)

When plugged into Eq. 7, this results in the cosine similarity ∈ [0, 1]. It may be suitable to set a threshold τ (e.g., τ = 0.5), to only consider the similarity between two concepts if it is above τ (softMatch = 0 if 1 − d(x, y) < τ).
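Eqs. 6–8 can be sketched in a few lines of Python. This is an illustrative sketch, not the reference implementation: the helper names and the three toy embedding vectors are our own assumptions (S2MATCH itself applies the soft match to GloVe vectors inside the alignment search).

```python
import numpy as np

def hard_match(x: str, y: str) -> float:
    # Eq. 6: indicator function, 1.0 iff the concept labels are identical
    return 1.0 if x == y else 0.0

def cosine_distance(vx: np.ndarray, vy: np.ndarray) -> float:
    # Eq. 8: d(x, y) = min{1, 1 - y^T x / (||y||_2 ||x||_2)}
    cos = float(vy @ vx / (np.linalg.norm(vy) * np.linalg.norm(vx)))
    return min(1.0, 1.0 - cos)

def soft_match(vx: np.ndarray, vy: np.ndarray, tau: float = 0.5) -> float:
    # Eq. 7 with threshold tau: similarity below tau is clipped to 0
    sim = 1.0 - cosine_distance(vx, vy)
    return sim if sim >= tau else 0.0

# Toy vectors standing in for GloVe embeddings (made up, for illustration only)
emb = {
    "cat":     np.array([0.9, 0.8, 0.1]),
    "kitten":  np.array([0.8, 0.9, 0.2]),
    "giraffe": np.array([0.1, 0.2, 0.9]),
}

# Hard matching treats both substitutions alike ...
assert hard_match("cat", "kitten") == hard_match("cat", "giraffe") == 0.0
# ... while soft matching rewards the semantically closer pair
assert soft_match(emb["cat"], emb["kitten"]) > soft_match(emb["cat"], emb["giraffe"])
```

With these toy vectors, cat–kitten passes the τ = 0.5 threshold while cat–giraffe is clipped to 0, which is exactly the graded behaviour motivated by Figure 7.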
input span (excerpt)             AMR region gold (excerpt)                            AMR region parser (excerpt)                            cos   F1 ↑  annotation
40 km southwest of               :quant 40 :unit (k2 / kilometer)                     (k22 / km :unit-of (d23 / distance-quantity            0.72  1.2   ex. similar
improving agricultural prod.     (i2 / improve-01 ... :mod (f2 / farming)             (i31 / improve-01 :mod (a23 / agriculture)             0.73  3.0   ex. similar
other deadly bacteria            :op3 (b / bacterium ... :mod (o / other))            :op3 (b13 / bacteria :ARG0-of ... :mod (o12 / other))  0.80  5.1   ex. similar
drug and law enforcement aid     (a / and :op2 (a3 / aid-01 ...                       :ARG1 (a9 / and :op1 (d8 / drug) :op2 (l10 / law)))    0.67  1.8   similar
Get a lawyer and get a divorce.  :op1 (g / get-01 :mode imp. :ARG0 (y / you)          :op1 (g0 / get-01 :ARG1 (l2 / lawyer) :mode imp.)      0.80  4.8   dissimilar
The unusual development.         :ARG0 (d / develop-01 :mod (u / usual :polarity -))  :ARG0 (d1 / develop-02 :mod (u0 / unusual))            0.60  14.0  dissimilar

Table 6: Examples where S2MATCH assigns a higher score, accounting for the similarity of aligned concepts.

Table 6 lists examples of good or ill-founded score increases. We observe, for example, that S2MATCH accounts for the similarity of two concepts of different number: bacterium (gold) vs. bacteria (parser) (line 3). It also captures abbreviations (km – kilometer) and closely related concepts (farming – agriculture). SEMBLEU and SMATCH would penalize the corresponding triples in exactly the same way as predicting a truly dissimilar concept. An interesting case is seen in line 7. Here, usual and unusual are correctly annotated as dissimilar, since they are opposite concepts. S2MATCH, equipped with GloVe embeddings, measures a cosine of 0.6, above the chosen threshold, which results in an increase of the score by 14 points (the increase is large as these two graphs are tiny).
It is well known that synonyms and antonyms are difficult to distinguish with distributional word representations, because they often share similar contexts. However, the case at hand is orthogonal to this problem: usual in the gold graph is modified with the polarity '−', whereas the predicted graph assigned the (non-negated) opposite concept unusual. Hence, given the context in the gold graph, the prediction is semantically almost equivalent. This points to an aspect of principle VII that is not yet covered by S2MATCH: It assesses graded similarity at the lexical, but not at the phrasal level, and hence cannot account for compositional phenomena. In future work, we aim to alleviate this issue by developing extensions that measure semantic similarity for larger graph contexts, in order to fully satisfy all seven principles.13

13 As we have seen, this requires much care. We therefore consider this next step to be out of scope of the present paper.

Quantitative study: Metrics vs. human raters  This study investigates to what extent the judgments of the three metrics under discussion resemble human judgements, based on the following two expectations. First, the more a human rates two sentences to be semantically similar in their meaning, the higher the metric should rate the corresponding AMR graphs (meaning similarity). Second, the more a human rates two sentences to be related in their meaning (maximum: equivalence), the higher the score of our metric of the corresponding AMR graphs should tend to be (meaning relatedness). Albeit not the exact same (Budanitsky and Hirst, 2006), the tasks are closely related and both in conjunction should allow us to better assess the performance of our AMR metrics. As ground truth for the meaning similarity rating task we use test data of the Semantic Textual Similarity (STS) shared task (Cer et al., 2017), with 1,379 sentence pairs annotated for meaning similarity.
For the meaning-relatedness task we use SICK (Marelli et al., 2014) with 9,840 sentence pairs that have been additionally annotated for semantic relatedness.14 We proceed as follows: We normalize the human ratings to [0, 1]. Then we apply GPLA to parse the sentence tuples (si, s′i), obtaining tuples (parse(si), parse(s′i)), and score the graph pairs with the metrics: SMATCH(i), S2MATCH(i), SEMBLEU(i), and H(i), where H(i) is the human score. For both tasks SMATCH and S2MATCH yield better or equal correlations with human raters than SEMBLEU (Table 7). When considering the RMS error √(n⁻¹ Σᵢ₌₁ⁿ (H(i) − metric(i))²), the difference is even more pronounced.

This deviation in the absolute scores is also reflected by the score density distributions plotted in Figure 9: SEMBLEU underrates a good proportion of graph pairs whose input sentences were rated as highly semantically similar or related by humans. This may well relate to the biases of different node types (cf. §3). Overall, S2MATCH appears to provide

14 An example from SICK. Max. score: A man is cooking pancakes – The man is cooking pancakes. Min. score: Two girls are playing outdoors near a woman. – The elephant is being ridden by the man. To further enhance the soundness of the SICK experiment we discard pairs with a contradiction relation and retain 8,416 pairs with neutral or entailment.

task    RMSE              RMSE (quant)      Pearson's ρ       Spearman's ρ
        SB    SM    S2M   SB    SM    S2M   SB    SM    S2M   SB    SM    S2M
STS     0.34  0.25  0.25  0.25  0.11  0.10  0.52  0.55  0.55  0.51  0.53  0.53
SICK    0.38  0.25  0.24  0.32  0.14  0.13  0.62  0.64  0.64  0.66  0.66  0.66

Table 7: RMSE (lower is better) and correlation results of our metrics in our STS and SICK investigations.
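The evaluation procedure just described (normalized human scores H(i) against metric scores, compared via RMSE and rank correlations) can be sketched as follows. The score arrays below are synthetic placeholders, not the STS or SICK data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Made-up stand-ins: human ratings H(i) normalized to [0, 1] and metric scores
H      = np.array([0.95, 0.10, 0.60, 0.80, 0.30])
metric = np.array([0.90, 0.20, 0.55, 0.70, 0.35])

# RMS error: sqrt(n^-1 * sum_i (H(i) - metric(i))^2)
rmse = np.sqrt(np.mean((H - metric) ** 2))

pearson_rho, _ = pearsonr(H, metric)    # linear correlation
spearman_rho, _ = spearmanr(H, metric)  # rank correlation

print(f"RMSE={rmse:.3f}  Pearson={pearson_rho:.2f}  Spearman={spearman_rho:.2f}")
```

Note that the two views are complementary: the correlations reward getting the ordering of pairs right, while the RMSE additionally penalizes deviations in the absolute scores, which is where Table 7 shows the largest gap to SEMBLEU.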
RMSE (quant): RMSE on empirical quantile distribution with quantiles 0.1, 0.2, ..., 0.9.

Figure 9: Sentence meaning similarity distributions.

a better fit with the score-distribution of the human rater when measuring semantic similarity and relatedness, the latter being notably closer to the human reference in some regions than the otherwise similar SMATCH. A concrete example from the STS data is given in Figure 10. Here, S2MATCH detects the similarity between the abstract anaphors it and this and assigns a score that better reflects the human score compared to SMATCH and SEMBLEU, the latter being far too low. However, in total, we conclude that S2MATCH's improvements seem rather small and no metric is perfectly aligned with human scores, possibly because gradedness of semantic similarity that arises in combination with constructional variation is not yet captured; more research is needed to extend S2MATCH's scope to such cases.

5 Metrics' effects on parser evaluation

We have seen that different metrics can assign different scores to the same pair of graphs. We now want to assess to what extent this affects rankings: Does one metric rank a graph higher or lower than the other? And can this affect the ranking of parsers on benchmark datasets?

Quantitative study: Graph rankings  In this experiment, we assess whether our metrics rank graphs differently. We use LDC2017T10 (dev) parses by CAMR [c1 . . . cn], JAMR [j1 . . . jn] and gold graphs [y1 . . . yn]. Given metrics F and G, we obtain results F^C := [F(c1, y1) . . . F(cn, yn)] and analogously F^J, G^C, and G^J. We calculate two
Figure 10: An example from STS, where S2MATCH yields a score that better reflects the human judgement, due to detecting a similarity between the abstract anaphora it and this.

statistics: (i) the ratio of cases i where the metrics differ in their preference for one parse over the other, (F^C_i − F^J_i) · (G^C_i − G^J_i) < 0, and, to assess significance, (ii) a t-test for paired samples on the differences assigned by the metrics between the parsers: F^C − F^J and G^C − G^J. Table 8 shows that SMATCH and S2MATCH both differ (significantly) from SEMBLEU in 15%–20% of cases. SMATCH and S2MATCH differ on individual rankings in appr. 4% of cases. Furthermore, we note a considerable amount of cases (8.1%) where SEMBLEU disagrees with itself in the preference for one parse over the other.15

The differing preferences of S2MATCH for either candidate parse can be the outcome of small divergences due to the alignment search or because S2MATCH accounts for the lexical similarity of concepts, perhaps supported by a new variable alignment. Figure 11 shows two examples where S2MATCH prefers a different candidate parse compared to SMATCH. In the first example (Figure 11a), S2MATCH prefers the parse

15 That is, SB(A, G) > SB(B, G) albeit SB(G, A) < SB(G, B).
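The two ranking statistics above can be sketched as follows; the per-graph scores are synthetic placeholders rather than actual CAMR/JAMR results:

```python
import numpy as np
from scipy.stats import ttest_rel

# Synthetic per-graph scores of two metrics F, G for two parsers C and J
rng = np.random.default_rng(0)
F_C, F_J = rng.uniform(0.4, 0.9, 100), rng.uniform(0.4, 0.9, 100)
G_C = F_C + rng.normal(0, 0.05, 100)  # G roughly agrees with F ...
G_J = F_J + rng.normal(0, 0.05, 100)  # ... up to small perturbations

# (i) ratio of graphs where the metrics prefer different parses:
#     (F_C_i - F_J_i) * (G_C_i - G_J_i) < 0
disagree = np.mean((F_C - F_J) * (G_C - G_J) < 0)

# (ii) paired t-test on the parser differences assigned by each metric
t, p = ttest_rel(F_C - F_J, G_C - G_J)
print(f"disagreement ratio: {disagree:.2%}, t={t:.2f}, p={p:.3f}")
```

The paired test is appropriate here because both metrics score the very same graph pairs, so the per-graph differences F_C − F_J and G_C − G_J form matched samples.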