RESEARCH PAPER

Bi-GRU Relation Extraction Model Based on Keywords
Attention

Yuanyuan Zhang1†, Yu Chen2, Shengkang Yu1, Xiaoqin Gu1, Mengqiong Song1, Yu Peng1, Jianxia Chen2
& Qi Liu2

1Technical Training Center of State Grid Hubei Electric Power Co., Ltd. Wuhan 430070, China

2Hubei University of Technology, School of Computer Science, Wuhan 430068, China

Keywords: Relation extraction; Bi-GRU; CRF keywords attention; Hidden similarity

Citation: Zhang, Y.Y. et al.: Bi-GRU Relation Extraction Model Based on Keywords Attention. Data Intelligence 4(3), 552-572 (2022). DOI: 10.1162/dint_a_00147

Received: Oct. 11, 2021; Revised: Jan. 15, 2022; Accepted: Feb. 10, 2022

ABSTRACT

Relation extraction plays an important role in natural language processing, predicting the semantic relationship between entities in a sentence. Currently, most models rely on natural language processing tools to capture high-level features and use an attention mechanism to mitigate the adverse effect of in-sentence noise on the prediction results. However, in the relation classification task, these attention mechanisms do not take full advantage of the semantic information carried by certain keywords that express the relation in the sentence. Therefore, we propose a novel relation extraction model based on an attention mechanism with keywords, named Relation Extraction Based on Keywords Attention (REKA). In particular, the proposed model makes use of a bi-directional GRU (Bi-GRU) to reduce computation, obtain sentence representations, and extract prior knowledge of the entity pair without any NLP tools. Besides calculating the entity-pair similarity, the keywords attention in the REKA model also utilizes a linear-chain conditional random field (CRF), combining entity-pair features, the similarity features between the entity-pair features and the hidden vectors, to obtain the attention weight from the marginal distribution of each word. Experiments demonstrate that the proposed approach can exploit keywords that carry relational semantics in sentences without the assistance of any high-level features and achieves better performance than traditional methods.

Corresponding author: Yuanyuan Zhang (Email: 16823650@qq.com; ORCID: 0000-0002-5353-2989).

© 2022 Chinese Academy of Sciences. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.


1. INTRODUCTION

Abundant data are generated and shared on the Web every day, so the relational facts of subjects (entities) in text are often used to represent the text information and to capture associations among those data. Generally, triples are used to represent entities and their relations, which often indicate unambiguous facts about the entities. For example, a triple (e1, r, e2) denotes that entity e1 has a relation r with another entity e2. Knowledge graphs (KG) such as FreeBase [1] and DBpedia [2] are real examples of such representations in the triple form.

Relation extraction is a sub-task of natural language processing (NLP) that discovers the relations between entity pairs in given unstructured text data. Previous work on relation extraction from text depended heavily on kernel and feature methods [3]. Recent studies use data-driven Deep Neural Network (DNN) methods to free RE from the conventional NLP pipelines, since DNN-based methods [4–6] can automatically learn features instead of relying on features designed manually with various NLP tool-kits. Most of them surpass the traditional methods and achieve excellent results on RE tasks. Among them, DNN-based supervised and distant-supervision methods are the most popular and reliable solutions for RE, but each has its own characteristics. Supervised methods perform better in specific domains, while distant-supervision methods perform better in generic domains. It is therefore difficult to say which of the two kinds of methods is the best. Thus, the following part introduces the DNN-based supervised methods in detail, in line with the research of this paper.

According to the structure of the DNNs, DNN-based supervised RE is usually classified into various types such as CNN [6–10], RNN [5, 11, 12], or mixed structures. In addition, some variant RNN networks have been adopted in RE systems, such as the Long Short-Term Memory network (LSTM) [13–15] and the Gated Recurrent Unit (GRU) [16]. Each kind of DNN has its own characteristics and advantages in dealing with various language tasks. For example, thanks to their parallel processing ability, CNNs are good at capturing local and structural information, but rarely capture global features and time-sequence information. Instead, RNNs, LSTMs, and GRUs, which are suitable for modeling sequences and problem transformation, can alleviate these problems that CNNs cannot overcome.

However, these structural RNN-based methods share a common drawback: many external artificial features are introduced without an effective feature-filtering mechanism [17]. Therefore, semantic-oriented approaches are used to improve the ability of semantic representation by capturing the internal associations of the text with attention mechanisms. To alleviate the influence of word-level noise within sentences, many efforts have been devoted to getting rid of irrelevant words [18–21], especially the recent state-of-the-art attention-based methods such as [19, 22, 23].

Although inner-sentence noise can be alleviated by attention mechanisms that calculate a weight for each word independently, some information useful for extraction is carried by runs of continuous words such as phrases. Yu et al. [24] propose an attention mechanism based on conditional random fields (CRF), which incorporates such keyword information into the neural relation extractor.


Compared with other strong feature-based classifiers and all baseline neural models, the CRF mechanism
is important for this model to construct a better attention weight.

Based on the above analysis, we propose a novel relation extraction model based on an attention mechanism with keywords, named Relation Extraction Based on Keywords Attention (REKA), which incorporates an attention mechanism based on relation-identifying keywords, similar to the segments in [24]. Different from the model in [24], our model makes use of a bi-directional GRU (Bi-GRU) to reduce computation without any NLP tools. In particular, the CRF attention mechanism includes two components: entity pair attention and segment attention.

The proposed entity pair attention adds additional weight to the entity part of the input so that it plays a more decisive role during encoding. The proposed segment attention assumes that each sentence has a corresponding binary sequence of states and that each state variable in the sequence corresponds to a word in the sentence. This binary state variable indicates whether the corresponding word is relevant to the relation extraction task, with 0 and 1, respectively. Inspired by [24], we utilize a linear-chain CRF incorporating segment attention to obtain the marginal distribution of each state variable as an attention weight.

To summarize, the contributions of the proposed REKA model are as follows:

• It proposes a novel Bi-GRU model based on an attention mechanism with keywords to handle relation extraction.
• Both entity pair similarity features and segment features are incorporated into the proposed attention mechanism with keywords.
• It achieves state-of-the-art performance without the assistance of any other NLP tools.
• It is more interpretable than the original Bi-GRU model.

2. RELATED WORK

2.1 RNN-Based Relation Extraction Models

Recently, relation extraction research has focused on extracting relational features with neural networks [25–27]. Zhang et al. [28] claimed that RNN-based relation extraction models perform better than CNN-based models, since CNNs can only obtain local features, while RNNs are good at learning long-distance dependencies between entities. Afterwards, the LSTM [15] was proposed, using a gate mechanism to solve the gradient explosion problem of RNN models. Based on this, Xu et al. [5] propose an LSTM model over the shortest dependency path (SDP) between entities, named the SDP-LSTM model, in which four types of information, including word vectors, POS tags, grammatical relations, and WordNet hypernyms, provide external information. To address the difficulty of shallow architectures in representing the latent space at different network levels, Xu et al. [29] obtain abstract features along the two sub-paths of the SDP.


Since dependency trees are directed graphs, it is necessary to identify whether the relation implies the reverse direction, i.e., whether the first entity is related to the second entity. Therefore, the SDP is divided into two sub-paths, each directed from an entity towards the common ancestor node. However, one-directional LSTM models lack a representation of the complete sequential information. Hence, the bidirectional LSTM model (BiLSTM) is utilized by Zhang et al. [30] to obtain sentence-level representations with several lexical features. The experimental results demonstrate that word embedding as an input feature alone is enough to achieve excellent results. However, the SDP can filter the input text but does not extract features. To address this issue, the attention mechanism is introduced for BiLSTM-based RE [31].

2.2 Attention Mechanisms for Relation Extraction

Since useful information can be presented anywhere in the sentence, some researchers recently have

presented attention-based models which can obtain the important semantic information in a sentence.

Zhou et al. [31] propose an attention mechanism in BiLSTM, which automatically captures the important features from the raw text alone. Similar to the work of Zhou et al. [31], Xiao et al. [32] propose a two-level BiLSTM architecture based on a two-level attention mechanism to extract a high-level representation of the raw sentence.

Although the attention mechanism is used to capture the important features extracted by the model, [31] just presents a randomly initialized weight without considering prior knowledge. Therefore, EAtt-BiGRU, proposed by Qin et al. [33], leverages the entity pair as prior knowledge to form the attention weight. Different from Zhou et al.'s [31] work, EAtt-BiGRU applies a bi-directional GRU (Bi-GRU) to reduce computation and capture the representation of sentences, and adopts a GRU to extract prior knowledge of the entity pairs. Zhang et al. [34] propose a Bi-GRU model based on another attention mechanism that uses the SDP as prior knowledge, extracting sentence-level features and attention weights. Nguyen et al. [35] propose a special attention mechanism and introduce dependency analysis that takes into account the interconnections between potential features.

With the proposed BERT model, which has achieved excellent performance on various NLP tasks, more and more studies have started to use the BERT model in search-matching tasks and achieved very good results. In the latest studies on pre-trained models, Wei et al. [36] achieved high metric scores using BERT. Although the BERT model has excellent encoding ability and can fully capture the semantic information of the context in a sentence, it still has problems such as high training costs and long prediction times.

Our model is inspired by Lee et al. [22], but different from the previous works that only obtain word-level or sentence-level attention and rarely capture the degree of correlation between entities and other related words, our model utilizes a Bi-GRU instead of a BiLSTM to reduce computation. Meanwhile, inspired by the attention model designed by Yu et al. [24] for relation extraction, which is capable of learning phrase-like features and capturing reasonably related segments as relational expressions based on


the CRF, we propose a novel attention mechanism that combines the entity pair attention with the CRF-based segment attention.

Although the above methods provide a solid foundation for research on supervised RE, they still have limitations. For example, the insufficient training corpus hampers the further development of supervised RE. Therefore, Mintz et al. [37] propose a distant supervision approach that relies strongly on an assumption in the selection of training examples. Distant supervision methods have also achieved excellent results for RE [38–40]. However, they also have some drawbacks; for example, the noise in the data sets is considerable. Hence, it is difficult to say which of the above two kinds of methods is currently the best. Thus, we only study the supervised methods in this paper.

3. METHODOLOGY

The proposed REKA model consists of four components, the structure of which is shown in Figure 1. The role of each layer is as follows:

• The input layer contains word vector information and location information.
• The self-attention layer processes the word vectors to obtain word representations.
• The Bi-GRU layer obtains contextual information about each word in a sentence.
• The keyword-attention layer extracts the key information in the sentence and passes it to the final classification layer.

Figure 1. The systematic architecture of the REKA model.


3.1 Input Layer

The REKA model's input layer transforms the original input sentence into an embedding vector containing various feature information, where the input sentences are denoted by {w1, w2, …, wn} and $\{p_1^{e_j}, p_2^{e_j}, \dots, p_n^{e_j}\}$ is the vector of relative-position features of every word with respect to the entity pair, $j \in \{1, 2\}$.

To further enhance the model's ability to capture the semantic information in sentences, a pre-trained Embeddings from Language Models (ELMo) [43] word embedding is used in this paper. It offers a better solution for words with multiple meanings, unlike the previous work of word2vec by Mikolov et al. [41] and GloVe by Pennington et al. [42], in which one word corresponds to a single stationary vector.

ELMo is a fully trained model into which a sentence or a paragraph is fed and which infers the word vector of each word based on its context. One obvious benefit of ELMo is that words with multiple meanings can be disambiguated through the preceding and following words.


After the word embedding process, the $d_w$-dimensional vectors {x1, x2, …, xn} are passed to the next layer together with the position feature vectors.
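To make the input construction concrete, the following is a minimal sketch, written in PyTorch, of how pre-computed contextual word vectors (for example, ELMo outputs) might be concatenated with relative-position embeddings for the two entities. The class name, dimensions, and tensor layout are illustrative assumptions rather than the paper's released code.

```python
import torch
import torch.nn as nn

class InputLayer(nn.Module):
    """Sketch: concatenate word vectors with relative-position embeddings."""

    def __init__(self, d_w=1024, d_p=50, max_len=100):
        super().__init__()
        # (2 * max_len - 1) possible relative offsets, shifted to be non-negative
        self.pos_emb = nn.Embedding(2 * max_len - 1, d_p)
        self.max_len = max_len

    def forward(self, word_vecs, e1_idx, e2_idx):
        # word_vecs: (batch, seq_len, d_w) pre-computed word vectors (e.g. ELMo)
        # e1_idx, e2_idx: (batch,) LongTensors with the entity token positions
        batch, seq_len, _ = word_vecs.shape
        positions = torch.arange(seq_len).unsqueeze(0).expand(batch, -1)
        # relative offsets to each entity, shifted into [0, 2 * max_len - 2]
        p_e1 = positions - e1_idx.unsqueeze(1) + self.max_len - 1
        p_e2 = positions - e2_idx.unsqueeze(1) + self.max_len - 1
        pos_feats = torch.cat([self.pos_emb(p_e1), self.pos_emb(p_e2)], dim=-1)
        # concatenate the word vectors with the two relative-position embeddings
        return torch.cat([word_vecs, pos_feats], dim=-1)
```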

3.2 Multi-Head Attention Layer

Although this paper uses non-fixed word vectors in the input layer, we apply a Multi-Head Attention (MHA) mechanism to the output vectors of the input layer to help the model further understand the deep semantic information in the sentences and to address the problem of long-term dependencies. MHA is a special kind of self-attention mechanism [17, 19], in which a symmetric similarity matrix over the sequence is constructed from the sequence of word vectors produced by the input layer.

As shown in Figure 2, given a key K, a query Q, and a value V, the multi-head attention module executes the attention r times; the calculation uses the following equations (1–3):

Figure 2. A sample of Multi-Head Attention [17].


$$\mathrm{MultiHead}(Q, K, V) = W^{M}\,\mathrm{Concat}[\mathrm{head}_1;\ \dots;\ \mathrm{head}_r] \tag{1}$$

$$\mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q},\ K W_i^{K},\ V W_i^{V}) \tag{2}$$

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_w}}\right) V \tag{3}$$

Here each head is a scaled dot-product attention; the heads are calculated in parallel and then concatenated. $W_i^{Q}, W_i^{K}, W_i^{V} \in \mathbb{R}^{d_w \times d_w / r}$ are the trainable projection matrices for the query, key, and value of the i-th head, respectively, and $W^{M} \in \mathbb{R}^{d_w \times d_w}$ is a trainable output parameter [17].

The inputs Q, K, and V are all equal to the word embedding vectors {x1, x2, …, xn} in the multi-head attention [17]. The output of the MHA self-attention is a sequence of features carrying information about the context of the input sentences.
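As an illustration, the self-attention pass of Equations (1)–(3) can be sketched with PyTorch's standard multi-head attention module, using the input-layer vectors as Q, K, and V. The dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch: self-attention over the input-layer vectors, with Q = K = V = X (Equations 1-3).
# d_model and r (the number of heads) are illustrative values, not the paper's exact settings.
d_model, r = 512, 4
mha = nn.MultiheadAttention(embed_dim=d_model, num_heads=r, batch_first=True)

x = torch.randn(2, 30, d_model)      # (batch, seq_len, d_model) vectors from the input layer
m, attn_weights = mha(x, x, x)       # m: contextualized sequence passed on to the Bi-GRU layer
```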

3.3 Bi-GRU Network

The Bi-GRU network layer is used to obtain the semantic information in sentences from the output sequence of the MHA self-attention layer. As shown in Figure 3, the GRU simplifies the LSTM by retaining only two gate operations, an update gate and a reset gate; its units therefore have fewer parameters and converge faster than LSTM units.


Figure 3. The GRU unit.

The GRU unit's processing of $m_i$ is written in this paper, for simplicity, as $\mathrm{GRU}(m_i)$. Therefore, the

equation (4–6) for calculating the contextualized word representation is obtained as follows:
$$\overrightarrow{h_t} = \overrightarrow{\mathrm{GRU}}(m_t) \tag{4}$$

$$\overleftarrow{h_t} = \overleftarrow{\mathrm{GRU}}(m_t) \tag{5}$$

$$h_t = [\overrightarrow{h_t};\ \overleftarrow{h_t}] \tag{6}$$


The input M resulting from the MHA self-attention layer is fed into the Bi-GRU network step by step. To make simultaneous use of past and future feature information at a given time step, we concatenate the hidden state $\overrightarrow{h_t} \in \mathbb{R}^{d_h}$ of the forward GRU network with the hidden state $\overleftarrow{h_t} \in \mathbb{R}^{d_h}$ of the backward GRU network at each step.

Here $d_h$ denotes the dimension of the GRU hidden state, {h1, h2, …, hn} denotes the hidden state vectors of the words, and the arrows indicate the direction of the GRU unit.
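A minimal sketch of this bidirectional encoding with PyTorch's built-in GRU follows; the hidden size and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch: Bi-GRU over the MHA output M (Equations 4-6); d_h is an illustrative hidden size.
d_model, d_h = 512, 256
bigru = nn.GRU(input_size=d_model, hidden_size=d_h,
               batch_first=True, bidirectional=True)

m = torch.randn(2, 30, d_model)      # output of the multi-head attention layer
h, _ = bigru(m)                      # h: (batch, seq_len, 2*d_h), i.e. [h_forward; h_backward]
```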

3.4 Keywords Attention based on CRF

Although attention mechanisms have achieved state-of-the-art results in a variety of NLP tasks, most of them do not fully exploit the keyword information in sentences. Keywords here are the words that are important for solving the relation extraction task, and the performance of the models can be improved if information about these keywords is exploited.

The goal of the attention mechanism with keywords proposed in this paper is to assign more reasonable weights to the hidden-layer vectors, where the attention weights are the scalar coefficients of a linear combination. A more reasonable weight assignment means that the model pays more attention to the more important words in the sentence than to the other words, and all the weights in this attention mechanism with keywords take values between 0 and 1.

However, the calculation of the weights differs between traditional attention mechanisms and the proposed model. Specifically, the proposed model defines a state variable z for each word in the sentence: the word corresponding to z is irrelevant to the relation classification of the sentence when z equals 0, and relevant when z equals 1. Hence, each sentence fed into the model has a corresponding sequence of z. From the above description, the expected value N of the hidden states, weighted by the probability that each corresponding word is selected, is calculated as in the following equation (7):

$$N = \sum_{i=1}^{n} p(z_i = 1 \mid H)\, h_i \tag{7}$$

In order to calculate $p(z_i = 1 \mid H)$, a CRF is introduced here to compute the sequence of weights over the hidden vectors H = {h1, h2, …, hn}, where H represents the input sequence and $h_i$ represents the hidden output of the GRU layer for the i-th word in the sentence. The CRF provides transition potentials for computing the conditional probabilities between adjacent states in the sequence.

The linear-chain CRF defines the conditional probability p(z | H) of a state sequence z given H with the following definitions (8–9):


$$p(z \mid H) = \frac{1}{Z(H)} \prod_{c \in C} \psi_c(z_c, H) \tag{8}$$

$$Z(H) = \sum_{z} \prod_{c \in C} \psi_c(z_c, H) \tag{9}$$

where the sum ranges over the set of state sequences, Z(H) denotes the normalization constant, $z_c$ is the subset of z belonging to an individual clique c, and $\psi_c(z_c, H)$ is the potential function of this clique. It is defined by the following equation (10):

$$\prod_{c \in C} \psi_c(z_c, H) = \prod_{i=1}^{n} \psi_1(z_i, H) \prod_{i=1}^{n-1} \psi_2(z_i, z_{i+1}) \tag{10}$$

For feature extraction, the feature extractor makes use of two types of feature functions: the vertex feature function $\psi_1(z_i, H)$ and the edge feature function $\psi_2(z_i, z_{i+1})$. $\psi_1$ represents the mapping from the GRU output h to the state variable z, and $\psi_2$ models the transition between two state variables at adjacent time steps. Their definitions are given by the following equations (11–13), respectively:

$$\psi_1(z_i, H) = \exp\left(W_{H} F_1 + W_{E} F_2 + b\right) \tag{11}$$

$$F_1 = [h_i;\ p_i^{e_1};\ p_i^{e_2}], \qquad F_2 = [h_{e_1};\ t^{1};\ h_{e_2};\ t^{2}] \tag{12}$$

$$\psi_2(z_i, z_{i+1}) = \exp\left(W_{z_i, z_{i+1}}\right) \tag{13}$$

where $W_H$ and $W_E$ are trainable parameters and b is a trainable bias term. They compute a contextual feature score for each state variable, which takes advantage of the entity position features $p_i^{e_1}, p_i^{e_2}$ as well as the keyword feature embeddings (the entity-pair hidden similarity features $t^1, t^2$ and the entity-pair features $h_{e_1}, h_{e_2}$).
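To illustrate how these scores might be assembled, the following is a small sketch of the unary score of Equations (11)–(12) in PyTorch; the dimensions, layer names, and function signature are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

d_h, d_p = 256, 50
W_H = nn.Linear(2 * d_h + 2 * d_p, 2)      # maps F1 to a score for each state z in {0, 1}
W_E = nn.Linear(4 * (2 * d_h), 2)          # maps F2 = [h_e1; t1; h_e2; t2] to the same space

def unary_scores(h, pos_e1, pos_e2, h_e1, t1, h_e2, t2):
    # h: (seq_len, 2*d_h) Bi-GRU states; pos_e1, pos_e2: (seq_len, d_p) position embeddings
    F1 = torch.cat([h, pos_e1, pos_e2], dim=-1)       # per-word features, Equation (12)
    F2 = torch.cat([h_e1, t1, h_e2, t2], dim=-1)      # shared entity-pair features, Equation (12)
    return W_H(F1) + W_E(F2)                          # log psi_1(z_i, H); biases play the role of b
```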

For the hidden vectors output by the Bi-GRU layer, the CRF keyword attention mechanism performs a soft selection by assigning higher weights to the words in the sentence that are more relevant to the classification. The processing of a sentence by the CRF keywords attention mechanism is shown in Figure 4, where the CRF keyword attention assigns a different weight to each word of the example sentence "The boy ran into the school cafeteria". In addition to the two entity words "boy" and "cafeteria", "into" is also assigned a higher weight relative to the other words, because it is a word associated with the relational classification.

Figure 4. The CRF keywords attention mechanism shown with an example sentence "The boy ran into the school cafeteria".
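To make the computation of the attention weights concrete, the following self-contained sketch (PyTorch; the function and variable names are hypothetical) obtains the marginals p(z_i = 1 | H) of a binary linear-chain CRF with the forward-backward algorithm. The log unary scores would correspond to ψ1 in Equation (11), for example the output of the unary_scores sketch above, and the log transition scores to ψ2 in Equation (13).

```python
import torch

def keyword_attention_weights(unary, trans):
    """
    unary: (seq_len, 2) log unary scores, i.e. log psi_1(z_i, H)
    trans: (2, 2)       log transition scores, i.e. log psi_2(z_i, z_{i+1})
    returns: (seq_len,) attention weights p(z_i = 1 | H)
    """
    n = unary.size(0)
    alpha = torch.zeros(n, 2)
    beta = torch.zeros(n, 2)
    alpha[0] = unary[0]
    for i in range(1, n):                     # forward pass
        alpha[i] = unary[i] + torch.logsumexp(alpha[i - 1].unsqueeze(1) + trans, dim=0)
    for i in range(n - 2, -1, -1):            # backward pass
        beta[i] = torch.logsumexp(trans + (unary[i + 1] + beta[i + 1]).unsqueeze(0), dim=1)
    log_z = torch.logsumexp(alpha[n - 1], dim=0)      # log partition function Z(H)
    log_marginal = alpha + beta - log_z               # log p(z_i = s | H)
    return log_marginal[:, 1].exp()                   # probability that word i is a keyword

# usage: weights = keyword_attention_weights(unary, trans)
#        N = (weights.unsqueeze(-1) * H).sum(dim=0)   # expected hidden state, Equation (7)
```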


Entity position feature: The proposed attention mechanism with keywords in this paper not only obtains

word embedding features but also incorporates position embedding features.

In order to represent contextual information as well as the relative position features $p_i^{e_1}, p_i^{e_2}$ of the entities, this paper concatenates them with the outputs $h_i$ of the corresponding hidden layers, as shown by $F_1$ in Equation (12).

Positional vectors are similar to word embeddings in that a relative-position scalar is transformed into a feature embedding vector by a lookup in the embedding matrix $W^{pos} \in \mathbb{R}^{(2L-1) \times d_p}$, where L is the maximum sentence length and $d_p$ is the dimension of the position vector.

Entity hidden similarity features: Entity hidden similarity features are extracted and used as entity features to replace the traditional entity feature extraction, thus avoiding the use of traditional NLP tools. Their calculation is defined in Equations (14–15).

$$a_i^{j} = \frac{\exp\left((h_{e_j})^{\top} c_i\right)}{\sum_{k=1}^{K} \exp\left((h_{e_j})^{\top} c_k\right)}, \qquad j \in \{1, 2\} \tag{14}$$

$$t^{j} = \sum_{i=1}^{K} a_i^{j}\, c_i \tag{15}$$

In this paper, entities are categorized according to the similarity of their hidden vectors. $C \in \mathbb{R}^{2d_h \times K}$ denotes a set of latent vectors representing the classes of similar entities, where K is a hyperparameter giving the number of classes into which entities are grouped by their hidden similarity. The j-th entity hidden similarity feature $t^{j}$ is calculated by weighting the latent class vectors $c_i$ with their similarity to the hidden-layer output $h_{e_j}$ of the j-th entity.

Entity features are constructed by concatenating the hidden states at the entity positions with the latent type representations of the entity pair, shown as $F_2$ in Equation (12).
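A compact sketch of Equations (14)–(15) follows (PyTorch; the function name and shapes are illustrative assumptions).

```python
import torch

def entity_similarity_feature(h_e, C):
    """
    h_e: (2*d_h,)    Bi-GRU hidden state at an entity position
    C:   (2*d_h, K)  latent class vectors (a trainable parameter of the model)
    returns the similarity-weighted feature t of Equation (15)
    """
    a = torch.softmax(h_e @ C, dim=-1)   # a_i = exp(h_e^T c_i) / sum_k exp(h_e^T c_k), Eq. (14)
    return C @ a                         # t = sum_i a_i * c_i, Eq. (15)

# usage (illustrative): t1 = entity_similarity_feature(h[e1_pos], C)
```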

3.5 Classification Layer

To compute the probability distribution p over the output relation labels given the state variables, a softmax layer is added after the keyword attention layer, as shown in Equation (16).

$$p(y \mid N) = \mathrm{softmax}\left(W_y N + b_y\right) \tag{16}$$

where |R| is the number of relation categories, $b_y \in \mathbb{R}^{|R|}$ is a bias term, and $W_y$ maps the expected value of the hidden state N to the feature scores of the relation labels.
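A minimal sketch of this classification layer is given below (PyTorch; the dimensions are illustrative, with |R| = 19 for SemEval-2010 Task 8).

```python
import torch
import torch.nn as nn

# Sketch of Equation (16): map the expected hidden state N to relation probabilities.
d_h, num_relations = 256, 19
classifier = nn.Linear(2 * d_h, num_relations)   # weights W_y and bias b_y

N = torch.randn(2 * d_h)                         # expected hidden state from Equation (7)
p_y = torch.softmax(classifier(N), dim=-1)       # p(y | N)
```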


3.6 Training

The proposed keywords attention model is trained with respect to the cross-entropy loss of the relation extraction task. This loss function is defined as shown in Equation (17).

$$L' = -\sum_{i=1}^{|D|} \log p\left(y^{(i)} \mid S^{(i)}, \theta\right) \tag{17}$$

where |D| is the size of the training dataset and $(S^{(i)}, y^{(i)})$ is the i-th sample in the dataset. The AdaDelta optimizer is utilized to minimize the loss with respect to the parameters $\theta$ in this paper.

To prevent overfitting, L2 regularization is added to the loss function, where $\lambda_1$ and $\lambda_2$ are the hyperparameters of the regularization. The second regularizer attempts to compel the model to rely on a small number of significant words and yields a sparse weight distribution. The resulting objective function L is shown in Equation (18).

$$L = L' + \lambda_1 \lVert \theta \rVert_2^{2} + \lambda_2 \sum_{i=1}^{n} p(z_i = 1 \mid H) \tag{18}$$
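A sketch of how this objective could be assembled is shown below (PyTorch; the tensor names, the concrete λ values, and the helper function are illustrative assumptions; the paper selects λ1 and λ2 by grid search, see Section 4.2).

```python
import torch
import torch.nn.functional as F

lambda1, lambda2 = 0.1, 0.1     # illustrative values from the grid-search range [0, 0.2]

def objective(logits, labels, keyword_probs, parameters):
    loss = F.cross_entropy(logits, labels)              # Equation (17)
    l2 = sum(p.pow(2).sum() for p in parameters)        # L2 penalty on the trainable parameters
    sparsity = keyword_probs.sum()                      # penalty on the marginals p(z_i = 1 | H)
    return loss + lambda1 * l2 + lambda2 * sparsity     # Equation (18)

# optimizer = torch.optim.Adadelta(model.parameters())  # AdaDelta, as used in the paper
```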

4. EXPERIMENTS

4.1 Dataset and Metric

To evaluate the proposed approach, we use the SemEval-2010 Task 8 dataset, a benchmark that is widely used in the field of relation extraction. The dataset has 19 relation classes: nine directed relations, each counted in both directions, plus an Other class, as shown in Table 1.

Table 1. Types of relationships in the dataset and their percentages.

Type                  Training (number)   Testing (number)   Training (%)   Testing (%)
Other                 1410                454                17.63          16.71
Cause-Effect          1003                328                12.54          12.07
Component-Whole       941                 312                11.76          11.48
Entity-Destination    845                 292                10.56          10.75
Product-Producer      717                 261                8.96           9.61
Entity-Origin         716                 258                8.95           9.50
Member-Collection     690                 233                8.63           8.58
Message-Topic         634                 231                7.92           8.50
Content-Container     540                 192                6.75           7.07
Instrument-Agency     504                 156                6.30           5.74

The dataset includes 10,717 sentences, of which 8,000 samples are used for training and the other 2,717 samples for testing. The evaluation metric used here is the macro-averaged F1 score, which is the official evaluation metric of the dataset.
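For reference only, a macro-averaged F1 can be computed with scikit-learn as sketched below; note that the official SemEval-2010 Task 8 scorer additionally handles relation directionality and excludes the Other class from the average, so this simplified call is only an approximation.

```python
from sklearn.metrics import f1_score

# y_true and y_pred are hypothetical relation labels for the 2,717 test sentences.
y_true = ["Cause-Effect(e1,e2)", "Other", "Entity-Origin(e1,e2)"]
y_pred = ["Cause-Effect(e1,e2)", "Other", "Entity-Origin(e2,e1)"]

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro-F1: {macro_f1:.4f}")
```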


4.2 Implementation Details

In this paper, a publicly available pre-trained ELMo model is used to initialize the word embeddings in the REKA model, and the other weights in the model are initialized randomly from a zero-mean Gaussian distribution. The relevant hyperparameters are shown in Table 2. Grid search was used to select the regularization coefficients $\lambda_1$ and $\lambda_2$ from 0 to 0.2.

Table 2. Hyperparameters of our model.

Hyper-parameter   Description                               Value
dropout rate      Keyword attention layer                   0.5
                  Bi-GRU layer                              0.6
                  Word embedding layer                      0.8
                  Multi-head attention layer                0.8
λ1, λ2            Regularization coefficient                [0, 0.2]
r                 Number of heads                           4
batch size        Size of mini-batch                        50
r1                Initial learning rate                     4
dr                Decay rate of learning                    0.5
da                Size of attention layer                   50
dh                Size of hidden layer                      512
K                 Number of the similar entities' classes   4
dp                Size of position embeddings               50

4.3 Comparison Models

The proposed REKA model is compared with the benchmark models below.

(1) SVM: The SVM [44] is a non-neural model that achieved top results in the SemEval-2010 task, but it uses many handcrafted and computationally intensive features such as WordNet, PropBank, FrameNet, etc.
(2) MV-RNN: The MV-RNN [45] is an SDP-based model; the SDP is a semantic structural feature of sentences, and models with the SDP can iterate along the shortest dependency path between entities.
(3) CNN: The CNN [4] is an end-to-end model on the SemEval-2010 task, which means that the output is obtained directly from the raw input. This model builds a convolutional neural network to learn sentence-level feature vectors.
(4) BiLSTM: The BiLSTM [30] obtains sentence-level representations on the SemEval-2010 task with bidirectional long short-term memory networks. It is the classic RNN-based relation extraction model.
(5) DepNN: The DepNN [46] employs an RNN to model subtrees and a CNN to capture features on the shortest path in sentences.


(6) FCM: The FCM [45] decomposes each sentence into sub-structures, extracts their features separately, and finally merges them in the classification layer.
(7) SDP-LSTM: The SDP-LSTM [5] employs long short-term memory (LSTM) units to capture features along the shortest dependency path (SDP) between entities.
(8) Purely self-attention [47]: Only a self-attention encoding layer is used, combined with a position-aware encoder, for relation classification.
(9) CASREL BERT [36]: CASREL BERT presents a cascade binary tagging framework (CASREL) and implements a new tagging framework that achieves some performance improvements.
(10) Entity-Aware BERT [48]: The method builds on BERT with structured prediction and an entity-aware self-attention layer, achieving excellent performance on the SemEval-2010 Task 8 dataset.

4.4 Experimental Results

To further evaluate the proposed model, we chose the RNN-based models from the list above for comparison. The Precision-Recall (PR) curves and the complexity analysis of the models are shown in Figure 5.


Figure 5. Precision-Recall curves for different fractions of the dataset (1%, 20%, and 100%, respectively), compared with RNN methods.

The comparison results between the REKA model and the other models are shown in Table 3, and the average precisions (AP) of REKA compared with the RNN methods are shown in Table 4.


Table 3. Comparison of the results on the SemEval-2010 Task 8 test dataset.

Model                        Additional Features (a)   F1
SVM [44]                     POS, WN, etc.             82.3
MV-RNN [45]                  POS, NER, WN              82.4
CNN [4]                      PE, WN                    82.7
BiLSTM [30]                  None                      82.7
                             + PF, POS, etc.           84.3
DepNN [46]                   DEP                       83.6
FCM [45]                     SDP, NER                  83.0
SDP-LSTM [5]                 SDP                       83.7
Purely Self-Attention [47]   PE                        83.8
CASREL BERT [36]             PE                        87.5
Entity-Aware BERT [48]       PE                        88.8
REKA Model                   PE                        84.8

Notes: a. WN, DEP, SDP, and PE are WordNet, dependency features, shortest dependency path, and position embeddings, respectively.


Table 4. Average precision score for our model and compared methods (micro-averaged over all classes) (a).

Fraction of test data   BiLSTM   SDP-LSTM   REKA
1%                      0.26     0.47       0.55
20%                     0.60     0.68       0.76
100%                    0.73     0.70       0.81

Notes: a. The first column shows how much of the testing data has been used. Performance is on the SemEval-2010 task dataset.

The experimental results show that the proposed REKA model is superior to the conventional models while using fewer features, but scores lower than Entity-Aware BERT and CASREL BERT. However, the pre-trained BERT model file is so large that training takes longer and places higher demands on the hardware.

As shown in Table 5, we conducted ablation experiments on the development dataset to explore the contribution of the various components of the keywords-aware attention mechanism to the experimental results. We gradually stripped the individual components from the original model; the results show that the F1 score decreases by 0.2 when the position embedding component is removed. MHA, the pre-trained ELMo word embeddings, and the entity hidden similarity features contribute 0.5, 1.2, and 0.8 F1 points to the model, respectively. In particular, a 2.3-point improvement in F1 comes from the keywords-aware attention. The experimental results therefore demonstrate that these components contribute to the model in a complementary way rather than working individually, and the combination of all components achieves an F1 score of 84.6.


Table 5. The effect of components on the F1-score of the model.

Model                                  Dev F1
Our model                              84.6
– Position embedding                   84.4
– Multi-head attention                 83.9
– Pre-trained ELMo word embeddings     82.7
– Entity hidden similarity features    81.9
– Keyword-aware attention              79.6

5. CONCLUSION

In this paper, we propose a novel Bi-GRU network model based on an attention mechanism with keywords for the RE task on the SemEval-2010 Task 8 dataset. The model adequately extracts the features available in the dataset through the keyword attention mechanism and achieves an F1 score of 84.8 without the use of other NLP tools. To calculate the marginal distribution for each word, which is chosen as the attention weight, the CRF keyword attention mechanism uses the similarity between the hidden vectors of the entity words and the relative position feature vectors with respect to the entity words. Our further research will focus on attention mechanisms that can better extract key information from sentences, and we plan to use them for the identification of relations among several entities.

ACKNOWLEDGMENTS

This work is supported by the Science and Technology Project (149, 2020) of State Grid Hubei Electric Power Co., Ltd.

AUTHOR CONTRIBUTIONS

Yuanyuan Zhang (Email: 16823650@qq.com, ORCID:0000-0002-5353-2989): has participated in the

proposed model design and writing of the manuscript.

Yu Chen (Email: 1148848330@qq.com, ORCID: 0000-0001-7316-3570): has participated in the coding,

the experiment and analysis, writing the manuscript.

Shengkang Yu (Email: 12052033@qq.com, ORCID:0000-0001-6374-3395): has participated in the part

of the experiment and analysis.

Xiaoqin Gu (Email: 1564785699@qq.com, ORCID: 0000-0001-6308-8474): has participated in the part

of the experiment and analysis.


Mengqiong Song (Email: 297365728@qq.com, ORCID:0000-0002-2816-5670): has participated in the

part of the experiment and analysis.

Yu Peng (Email: 1039079148@qq.com, ORCID:0000-0002-5353-2989): has participated in the revision

of the manuscript.

Jianxia Chen (Email: 1607447166@qq.com, ORCID: 0000-0001-6662-1895): has participated in the

model design, problem analysis, writing and revision of the manuscript.

Qi Liu (Email:260129443@qq.com, ORCID:0000-0003-1066-898X): has participated in the writing and

revision of the manuscript.

REFERENCES

[1] Bollacker, K., Evans, C., Paritosh, P., Sturge, T., & Taylor, J.: Freebase: a collaboratively created graph database for structuring human knowledge. In: Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1247–1250 (2008, June)
[2] Auer, S., Bizer, C., Kobilarov, G., Lehmann, J., Cyganiak, R., & Ives, Z.: DBpedia: A nucleus for a web of open data. In: The semantic web, pp. 722–735 (2007). Springer, Berlin, Heidelberg
[3] Pawar, S., Palshikar, G.K., & Bhattacharyya, P.: Relation extraction: A survey. arXiv preprint arXiv:1712.05191 (2017)
[4] Zeng, D., Liu, K., Lai, S., Zhou, G., & Zhao, J.: Relation classification via convolutional deep neural network. In: Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers, pp. 2335–2344 (2014, August)
[5] Xu, Y., Mou, L., Li, G., Chen, Y., Peng, H., & Jin, Z.: Classifying relations via long short term memory networks along shortest dependency paths. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 1785–1794 (2015, September)
[6] Liu, C., Sun, W., Chao, W., & Che, W.: Convolution neural network for relation extraction. In: International conference on advanced data mining and applications, pp. 231–242 (2013, December). Springer, Berlin, Heidelberg
[7] Nguyen, T.H., & Grishman, R.: Relation extraction: Perspective from convolutional neural networks. In: Proceedings of the 1st workshop on vector space modeling for natural language processing, pp. 39–48 (2015, June)
[8] Santos, C.N.D., Xiang, B., & Zhou, B.: Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580 (2015)
[9] Chen, Y.: Convolutional neural network for sentence classification (Master's thesis, University of Waterloo) (2015)
[10] Kalchbrenner, N., Grefenstette, E., & Blunsom, P.: A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 (2014)
[11] Elman, J.L.: Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning 7(2), 195–225 (1991)
[12] Zhang, D., & Wang, D.: Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006 (2015)


[13] Zhang, S., Zheng, D., Hu, X., & Yang, M.: Bidirectional long short-term memory networks for relation classification. In: Proceedings of the 29th Pacific Asia conference on language, information and computation, pp. 73–78 (2015, October)
[14] Sundermeyer, M., Schlüter, R., & Ney, H.: LSTM neural networks for language modeling. In: Thirteenth annual conference of the international speech communication association (2012)
[15] Hochreiter, S., & Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)
[16] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)
[17] Wang, H., Qin, K., Zakari, R.Y., Lu, G., & Yin, J.: Deep neural network-based relation extraction: an overview. Neural Computing and Applications, 1–21 (2022)
[18] Xu, Y., Mou, L., Li, G., Chen, Y., Peng, H., & Jin, Z.: Classifying relations via long short term memory networks along shortest dependency paths. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 1785–1794 (2015, September)
[19] Zhang, Y., Zhong, V., Chen, D., Angeli, G., & Manning, C.D.: Position-aware attention and supervised data improve slot filling. In: Conference on Empirical Methods in Natural Language Processing (2017)

[20] Zhang, Y., Qi, P., & Manning, C.D.: Graph convolution over pruned dependency trees improves relation extraction. arXiv preprint arXiv:1809.10185 (2018)

[21] Liu, T., Zhang, X., Zhou, W., & Jia, W.: Neural relation extraction via inner-sentence noise reduction and transfer learning. arXiv preprint arXiv:1808.06738 (2018)
[22] Lee, J., Seo, S., & Choi, Y.S.: Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing. Symmetry 11(6), 785 (2019)
[23] Wang, H., Qin, K., Lu, G., Luo, G., & Liu, G.: Direction-sensitive relation extraction using Bi-SDP attention model. Knowledge-Based Systems 198, 105928 (2020)
[24] Yu, B., Zhang, Z., Liu, T., Wang, B., Li, S., & Li, Q.: Beyond word attention: Using segment attention in neural relation extraction. In: IJCAI, pp. 5401–5407 (2019, August)
[25] Aydar, M., Bozal, O., & Ozbay, F.: Neural relation extraction: a survey. arXiv e-prints, arXiv-2007 (2020)
[26] Socher, R., Huval, B., Manning, C.D., & Ng, A.Y.: Semantic compositionality through recursive matrix-vector spaces. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pp. 1201–1211 (2012, July)
[27] Zeng, D., Liu, K., Chen, Y., & Zhao, J.: Distant supervision for relation extraction via piecewise convolutional neural networks. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 1753–1762 (2015, September)
[28] Zhang, D., & Wang, D.: Relation classification via recurrent neural network. arXiv preprint arXiv:1508.01006 (2015)
[29] Xu, Y., Jia, R., Mou, L., Li, G., Chen, Y., Lu, Y., & Jin, Z.: Improved relation classification by deep recurrent neural networks with data augmentation. arXiv preprint arXiv:1601.03651 (2016)
[30] Zhang, S., Zheng, D., Hu, X., & Yang, M.: Bidirectional long short-term memory networks for relation classification. In: Proceedings of the 29th Pacific Asia conference on language, information and computation, pp. 73–78 (2015, October)
[31] Zhou, P., Shi, W., Tian, J., Qi, Z., Li, B., Hao, H., & Xu, B.: Attention-based bidirectional long short-term memory networks for relation classification. In: Proceedings of the 54th annual meeting of the association for computational linguistics (Volume 2: Short papers), pp. 207–212 (2016, August)


[32] Xiao, M., & Liu, C.: Semantic relation classification via hierarchical recurrent neural network with attention. In: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 1254–1263 (2016, December)
[33] Qin, P., Xu, W., & Guo, J.: Designing an adaptive attention mechanism for relation classification. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp. 4356–4362 (2017, May). IEEE
[34] Zhang, C., Cui, C., Gao, S., Nie, X., Xu, W., Yang, L., … & Yin, Y.: Multi-gram CNN-based self-attention model for relation classification. IEEE Access 7, 5343–5357 (2018)
[35] Zhang, C., Cui, C., Gao, S., Nie, X., Xu, W., Yang, L., … & Yin, Y.: Multi-gram CNN-based self-attention model for relation classification. IEEE Access 7, 5343–5357 (2018)
[36] Wei, Z., Su, J., Wang, Y., Tian, Y., & Chang, Y.: A novel cascade binary tagging framework for relational triple extraction. arXiv preprint arXiv:1909.03227 (2019)
[37] Mintz, M., Bills, S., Snow, R., & Jurafsky, D.: Distant supervision for relation extraction without labeled data. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 1003–1011 (2009, August)
[38] He, Y., Li, Z., Yang, Q., Chen, Z., Liu, A., Zhao, L., & Zhou, X.: End-to-end relation extraction based on bootstrapped multi-level distant supervision. World Wide Web 23(5), 2933–2956 (2020)
[39] Han, X., Liu, Z., & Sun, M.: Neural knowledge acquisition via mutual attention between knowledge graph and text. In: Proceedings of the AAAI Conference on Artificial Intelligence 32(1) (2018, April)
[40] Wang, G., Zhang, W., Wang, R., Zhou, Y., Chen, X., Zhang, W., … & Chen, H.: Label-free distant supervision for relation extraction via knowledge graph embedding. In: Proceedings of the 2018 conference on empirical methods in natural language processing, pp. 2246–2255 (2018)
[41] Mikolov, T., Chen, K., Corrado, G., & Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
[42] Pennington, J., Socher, R., & Manning, C.D.: GloVe: Global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543 (2014, October)
[43] Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L.: Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research 304, 114135 (2021)
[44] Rink, B., & Harabagiu, S.: UTD: Classifying semantic relations by combining lexical and semantic resources. In: Proceedings of the 5th international workshop on semantic evaluation, pp. 256–259 (2010, July)
[45] Socher, R., Huval, B., Manning, C.D., & Ng, A.Y.: Semantic compositionality through recursive matrix-vector spaces. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pp. 1201–1211 (2012, July)
[46] Liu, Y., Wei, F., Li, S., Ji, H., Zhou, M., & Wang, H.: A dependency-based neural network for relation classification. arXiv preprint arXiv:1507.04646 (2015)
[47] Bilan, I., & Roth, B.: Position-aware self-attention with relative positional encodings for slot filling. arXiv preprint arXiv:1807.03052 (2018)
[48] Wang, H., Tan, M., Yu, M., Chang, S., Wang, D., Xu, K., … & Potdar, S.: Extracting multiple-relations in one-pass with pre-trained transformers. arXiv preprint arXiv:1902.01030 (2019)


AUTHOR BIOGRAPHIES

Yuanyuan Zhang (1979–), male, Ph.D., graduated from Wuhan University,
associate professor and senior engineer of Technical Training Center of State
Grid Hubei Electric Power Co., Ltd. Research direction: intelligent substation
Technologie, intelligent power grid operation and inspection technology,
Email: 16823650@qq.com, ORCID: 0000-0002-5353-2989

Yu Chen (1996–), male, graduate student of Hubei University of Technology,
research direction: Artificial Intelligent, NLP, Email: 1148848330@qq.com,
ORCID: 0000-0001-7316-3570


Shengkang Yu (1993–), male, master graduated from Huazhong University
of Science and Technology, lecturer and intermediate engineer of Technical
Training Center of State Grid Hubei Electric Power Co.,Ltd. Research direction:
fault diagnosis of electrical equipment, Email: 120520338@qq.com., ORCID:
0000-0001-6374-3395


Xiaoqin Gu (1973–), female, master graduated from Hubei University, lecturer of Technical Training Center of State Grid Hubei Electric Power Co., Ltd. Research direction: power grid operation technology. Email: 1564785699@qq.com, ORCID: 0000-0001-6308-8474

Mengqiong Song (1991–), female, master graduated from Wuhan University,
intermediate engineer of Technical Training Center of State Grid Hubei
Electric Power Co., Ltd. Research direction: power grid operation technology,
Email: 297365728@qq.com, ORCID: 0000-0002-2816-5670


Yu Peng, female, graduated from Wuhan University. Research direction: grid
power electronics, Email: 1039079148@qq.com, ORCID: 0000-0002-5353-
2989


Jianxia Chen is an associate professor in School of Computer Science at
Hubei University of Technology. She obtained her MS at Huazhong University
of Science & Technology in China. She has worked as a research fellow of the CCF in China and the ACM in the USA. Her particular research interests are in
knowledge graph and recommendation systems.
ORCID: 0000-0001-6662-1895

Qi Liu, female, graduate student of Hubei University of Technology, research direction: Artificial Intelligence, NLP, Email: 260129443@qq.com, ORCID: 0000-0003-1066-898X
