DATA PAPER
Identifying User Profile by Incorporating Self-Attention
Mechanism based on CSDN Data Set
Junru Lu1, Le Chen1, Kongming Meng1,2, Fengyi Wang3, Jun Xiang1, Nuo Chen1,
Xu Han4 & Binyang Li1†
1School of Information Science and Technology, University of International Relations, Beijing 100091, China
2Deep Brain Co., Ltd., Shanghai 201200, China
3University of Chinese Academy of Sciences, Beijing 100049, China
4College of Information Engineering, Capital Normal University, Beijing 100048, China
Keywords: User profile; Convolutional neural network (CNN); Self-attention; Keyword extraction
Citation: J. Lu, L. Chen, K. Meng, F. Wang, J. Xiang, N. Chen, X. Han, & B. Li. Identifying user profile by incorporating self-
attention mechanism based on CSDN data set. Data Intelligence 1(2019), 160-175. doi: 10.1162/dint_a_00009
Received: August 27, 2018; Revised: November 30, 2018; Accepted: December 6, 2018
ABSTRACT
With the popularity of social media, there has been an increasing interest in user profiling and its
applications nowadays. This paper presents our system named UIR-SIST for User Profiling Technology
Evaluation Campaign in SMP CUP 2017. UIR-SIST aims to complete three tasks, including keywords
extraction from blogs, user interests labeling and user growth value prediction. To this end, we first extract
keywords from a user's blog, including the blog itself, blogs on the same topic and other blogs published by
the same user. Then a unified neural network model is constructed based on a convolutional neural network
(CNN) for user interests tagging. Finally, we adopt a stacking model for predicting user growth value. We
eventually received the sixth place with evaluation scores of 0.563, 0.378 and 0.751 on the three tasks,
respectively.
1. INTRODUCTION
Social media have recently become an important platform that enables its users to communicate and
spread information. User-generated content (UGC) has been used for a wide range of applications, including
user profiling. The Chinese Software Developer Network (CSDN) is one of the biggest platforms of software
† Corresponding author: Binyang Li (E-mail: byli@uir.edu.cn; ORCID: 0000-0001-9013-1386).
© 2019 Chinese Academy of Sciences. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0)
license
Downloaded from http://direct.mit.edu/dint/article-pdf/1/2/160/1476700/dint_a_00009.pdf by guest on 08 September 2023
developers in China to share technical information and engineering experiences. Analyzing UGC on the
CSDN can uncover users’ interests in the software development process, such as their past interests and
current focus, even if their user profiles are incomplete or even missing. Apart from the UGC, user behavior
data also contain useful information for user profiling, such as “following,” “replying,” and “sending private
messages,” through which the friendship network is constructed to indicate user gender [1,2,3], age [4],
political polarity [5, 6] or profession [7].
In SMP CUP 2017 [8], the competition is structured around three tasks based on CSDN blogs: (1)
keywords extraction from blogs, (2) user interests labeling and (3) user growth value prediction. Our team
from School of Information Science and Technology, University of International Relations participated in
all the tasks in the User Profiling Technology Evaluation Campaign. This paper describes the framework of our
system UIR-SIST for the competition. We first extract keywords from a user’s blog, including the blog itself,
blogs on the same topic, and other blogs published by the same user. Then a unified neural network model
is constructed with self-attention mechanism for task 2. The model is based on multi-scale convolutional
neural networks with the aim to capture both local and global information for user profiling. Finally, we
adopt a stacking model for predicting user growth value. According to SMP CUP 2017's metrics, our model
achieved scores of 0.563, 0.378 and 0.751 on the three tasks, respectively.
This paper is organized as follows. Section 2 introduces the User Profiling Technology Evaluation Campaign
in detail. Section 3 describes the framework of our system. We present the evaluation results in Section
4. Finally, Section 5 concludes the paper.
2. EVALUATION OVERVIEW
2.1 Data Set
The data set used in SMP CUP 2017 is provided by CSDN, which is one of the largest information
technology communities in China. The CSDN data set consists of all user generated content and the
behavior data from 157,427 users during 2015, which can be further divided into three parts:
1) 1,000,000 pieces of user blogs, involving blog ID, blog title and the corresponding content;
2) Six types of user behavior data, including posting, browsing, commenting, voting up, voting down
and adding favorites, and the corresponding date and time information;
3) Relationship between users, which refers to the records of following and sending private messages.
More details about the size and type of the CSDN data set are shown in Table 1.
https://github.com/LuJunru/SMPCUP2017_ELP
Table 1. Statistics of the evaluation data set.

Attribute | Content | Size | Format
Blogs | Users' blogs | 1,000,000 | D0802938/Title/Content
Behavior: Post | Record of posting blogs | 1,000,000 | U0024827/D0874760/2015-02-05 11:18:32.0
Behavior: Browse | Record of browsing blogs | 3,536,444 | U0143891/D0122539/20150919 18:05:49.0
Behavior: Comment | Record of commenting on blogs | 182,273 | U0075737/D0383611/2015-10-30 09:48:07
Behavior: Vote up | Record of clicking a “like” button | 95,668 | U0111639/D0627490/2015-02-21
Behavior: Vote down | Record of clicking a “dislike” button | 9,326 | U0019111/D0582423/2015-11-23
Behavior: Add favorites | Record of adding blogs to a user's favorites list | 104,723 | U0014911/D0552113/2015-06-07 07:05:05
Relationships: Follow | Record of following relationships | 667,037 | U0124114/U0020107
Relationships: Send private messages | Record of sending private messages | 46,572 | U0079109/U0055181/2015-12-24

Table 2 illustrates an example from the given data set.
Table 2. Sample of CSDN data set.

Attribute | Data sample
User ID | U00296783
Blog ID | D00034623
Blog content | Title and content.
Keywords | Keyword1: TextRank; Keyword2: PageRank; Keyword3: Summary
Interest tags | Tag1: Big data; Tag2: Data mining; Tag3: Machine learning
Post | U00296783/D00034623/20160408 12:35:49
Browse | D09983742/20160410 08:30:40
Comment | D09983742/20160410 08:49:02
Vote up | D00234899/20160410 09:40:24
Vote down | D00098183/20160501 15:11:00
Send private messages | U00296783/U02748273/20160501 15:30:36
Add favorites | D00234899/20160410 09:40:44
Follow | U00296783/U02666623/20161119 10:30:44
Growth value | 0.0367

2.2 Tasks

Task 1: To extract three keywords from each document that can well represent the topic or the main idea
of the document.
Task 2: To generate three labels to describe a user's interests, where the labels are chosen from a given
candidate set (42 in total).
Task 3: To predict each user's growth value for the next six months according to his/her behavior over the
past year, including the texts, the relationships and the interactions with other users. The growth
value needs to be scaled into [0, 1], where 0 represents user drop-out.
2.3 Metrics
To assess the system effectiveness in completing the above-mentioned tasks, the following evaluation
metrics are designed for each individual task.
Score1 is defined to calculate the overlapping ratio between the extracted keywords and the standard
answers, which can be computed by Equation (1):

$$\mathrm{Score}_1 = \frac{1}{N}\sum_{i=1}^{N}\frac{|K_i \cap K_i^*|}{|K_i|}, \qquad (1)$$

where N is the size of the validation set or the test set, K_i is the extracted keyword set of document i,
and K_i^* is the standard keyword set of document i. Note that it is defined that |K_i| = 3 and |K_i^*| = 5.
Score2 denotes the overlapping ratio between the model's tagging and the answers, which can be expressed by
Equation (2):
$$\mathrm{Score}_2 = \frac{1}{N}\sum_{i=1}^{N}\frac{|T_i \cap T_i^*|}{|T_i|}, \qquad (2)$$

where T_i is the automatically generated tag set of user i, and T_i^* is the standard tag set of user i. It is
also defined that |T_i| = 3 and |T_i^*| = 3.
Score3 is calculated by relative error between the predicted growth value and the real growth value of
users, which can be expressed by Equation (3):

$$\mathrm{Score}_3 = 1 - \frac{1}{N}\sum_{i=1}^{N}\begin{cases} 0, & v_i = 0,\ v_i^* = 0 \\ \dfrac{|v_i - v_i^*|}{\max(v_i,\ v_i^*)}, & \text{otherwise} \end{cases} \qquad (3)$$

where v_i is the predicted growth value of user i, and v_i^* is the real growth value of user i.
The overall score can be computed by Equation (4):

$$\mathrm{Score}_{all} = \mathrm{Score}_1 + \mathrm{Score}_2 + \mathrm{Score}_3. \qquad (4)$$
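The three metrics can be sketched directly from Equations (1)–(4); the function names and the set/float input types below are our own illustrative choices, not the official evaluation kit.

```python
# Sketch of the evaluation metrics. Keyword/tag sets are Python sets,
# growth values are floats in [0, 1].

def score1(extracted, gold):
    # Mean overlap ratio between extracted keyword sets (|K_i| = 3)
    # and the standard answers (|K_i*| = 5), per Equation (1).
    return sum(len(k & k_star) / len(k) for k, k_star in zip(extracted, gold)) / len(extracted)

def score2(tagged, gold):
    # Same overlap ratio for interest tag sets (|T_i| = |T_i*| = 3), per Equation (2).
    return sum(len(t & t_star) / len(t) for t, t_star in zip(tagged, gold)) / len(tagged)

def score3(pred, real):
    # 1 minus the mean relative error; the pair (0, 0) contributes zero error.
    def rel_err(v, v_star):
        return 0.0 if v == 0 and v_star == 0 else abs(v - v_star) / max(v, v_star)
    return 1 - sum(rel_err(v, v_star) for v, v_star in zip(pred, real)) / len(pred)
```

The overall score is simply the sum of the three.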
3. SYSTEM OVERVIEW
The overall architecture of UIR-SIST is described in Figure 1. The UIR-SIST system is comprised of four
modules:
1) Preprocessing module: reads all blogs of the training set and test set, and performs word segmentation,
part-of-speech (POS) tagging, named entity recognition and semantic role labeling;
2) Keyword extraction module: extracts three keywords to represent the main idea of a blog, which
can be captured from three aspects to generate the candidate keyword set, including the blog
content, other blogs published by the same user, and the blogs on the same topic, as shown in the
green part;
3) User interests tagging module: constructs a neural network combining user content embedding
with keyword and user tag embeddings for user interests tagging, as shown in the red part;
4) User growth value prediction module: incorporates users' interaction information and behavior
features into a supervised learning model for growth value prediction, as shown in the blue part.
Figure 1. System architecture.
3.1 Keywords Extraction
The objective of task 1 is to extract three keywords from each blog that can represent the main idea of
the blog. In our opinion, the main idea can be extracted from the following three aspects, the blog itself,
other blogs published by the same user, and the blogs on the same topic. Based on this assumption, we
adopt three different models that can capture each aspect to generate a candidate keyword set: tf-idf,
TextRank and LDA, which have proved very effective in related tasks. Then three keywords are
extracted from the candidate set by using different rules.
We first adopt the classic tf-idf term weighting scheme to reflect the content of the blog itself. Then
we rank the words by tf-idf score, and select the top 100 to form the candidate keyword set.
Regarding the blogs on the same topic, we adopt the TextRank approach [9] to cluster these blogs together.
Meanwhile, all the keywords are weighted during this process. We finally select the top 300 keywords.
In addition, we utilize topic information to extract keywords. Since 42 categories of tags are given in
task 2, we assume that these 42 topics are extracted from all the blogs. Therefore, we use the Latent Dirichlet
Allocation (LDA) model [10] to extract the top 100 keywords for each category from the 1,000,000 blogs, and thus
obtain the distribution information of these 4,200 subject keywords across categories.
In summary, we consider three aspects in order to reflect the blog content and obtain three independent
candidate keyword sets, extracted by the tf-idf model, the TextRank model and the LDA model. After
that, we only keep the intersection of these sets. In our training set for task 1, about 5,000 keywords are provided,
which were collected after extraction and deduplication.
A drawback of the classic tf-idf model is that it simply presupposes that the rarer a word is in the corpus,
the more important it is, and the greater its contribution to the main idea of the text. However, for
a group of articles that mainly use the same keywords and describe similar concepts,
the calculated scores contain many errors. This is also why we use tf-idf on a single short blog,
while we use the TextRank model on the long collection of blogs published by the same user.
Furthermore, in order to enhance its cross-topic analysis ability, we borrow the idea of the 2016 Big Data &
Computing Intelligence Contest sponsored by the China Computer Federation (CCF), improve on the
results of the traditional tf-idf calculation, and obtain the S-TFIDF(w) score by Equation (5):

$$S\text{-}TFIDF(w) = TFIDF(w) \cdot \left(1 - \frac{C_w - 1}{42}\right), \qquad (5)$$

where C_w is the frequency of word w appearing in the 42 categories.
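Under our reading of the (partly garbled) printed Equation (5), the reweighting reduces to a one-liner: a word confined to a single category keeps its full tf-idf score, while a word spread across all 42 categories is heavily down-weighted.

```python
# Sketch of the S-TFIDF reweighting, assuming we already have a word's
# tf-idf score and the number of tag categories it occurs in (1..42).

def s_tfidf(tfidf_w, c_w, n_categories=42):
    # Cross-topic words (large c_w) are down-weighted.
    return tfidf_w * (1 - (c_w - 1) / n_categories)
```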
3.2 User Interests Tagging
The objective of this task is to tag a user's interests with three labels out of the 42 given ones. We model this
task with neural networks; the model structure is shown in Figure 2. Each blog is represented by a blog
embedding [11] through convolution and max-pooling layers. Then we obtain a user's content embedding
as a weighted sum of all of his or her blog embeddings, where the weight of each blog embedding is
computed by a self-attention mechanism. The content embedding and keyword embedding are concatenated as
the user embedding, and finally fed to the output layer.
https://github.com/coderSkyChen/2016CCF_BDCI_Sougou
Figure 2. Framework of the CNN model based on weighted blog embeddings in task 2.
In our system, a convolutional neural network (CNN) model is constructed for blog representation instead
of a recurrent neural network (RNN), since more global information can be captured for indicating user
interests and time efficiency is also enhanced. Multi-scale convolutional neural networks [12] are widely
acknowledged for their outstanding achievements in computer vision [13], and TextCNN, designed by
arraying word embeddings vertically, has also shown quite high effectiveness on natural language
processing (NLP) tasks [14].
In our CNN model, we treat a blog as a sequence of words x = [x_1, x_2, …, x_l], where each word is
represented by its word embedding vector, and return a feature matrix S of the blog. The narrow convolution
layer attached after the matrix is based on a kernel W ∈ R^{kd} of width k, a nonlinear function f and a bias
variable b, as described by Equation (6):

$$h_i = f(W \cdot x_{i:i+k-1} + b), \qquad (6)$$

where x_{i:j} refers to the concatenation of the word vectors from position i to
position j. In this task, we use several kernel sizes to obtain multiple local contextual feature maps in the
convolution layer, and then apply max-over-time pooling [15] to extract the most important
features.
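The convolution and pooling step of Equation (6) can be sketched in numpy; for brevity we use a single filter per kernel width, whereas a real TextCNN learns many filters per width. All dimensions here are toy choices.

```python
# Numpy sketch of narrow convolution + max-over-time pooling over one blog.
import numpy as np

def conv_maxpool(X, kernels, b=0.0):
    # X: (seq_len, d) word-embedding matrix of one blog.
    # kernels: list of (k, d)-shaped filters of different widths k.
    feats = []
    for W in kernels:
        k = W.shape[0]
        # narrow convolution: slide the width-k kernel over the word sequence
        h = [np.tanh(np.sum(W * X[i:i + k]) + b) for i in range(X.shape[0] - k + 1)]
        feats.append(max(h))                     # max-over-time pooling
    return np.array(feats)                       # one pooled feature per kernel

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))                 # 10 words, embedding dim 4
kernels = [rng.standard_normal((k, 4)) for k in (2, 3, 4)]
print(conv_maxpool(X, kernels).shape)
```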
The output is a low-dimensional, dense representation of each single blog. After
that, each user's relevant blogs are computable. We simply average a user's blog vectors to obtain the content
embedding c(u) for an individual user:

$$c(u) = \frac{1}{T}\sum_{i=1}^{T} s_i, \qquad (7)$$

where T is the total number of a user's related blogs.
However, different sources of blogs imply different extents of a user's interest in different topics. For example,
a blog posted by a user may be an article written by himself, reposted from other users, or
shared from another platform. It is natural that we pay attention to these blogs in varying
degrees when we infer the user's interests. Thus, a self-attention mechanism is introduced, which
automatically assigns a different weight to each of a user's blogs after training. The user context
representation is given by the weighted sum of all blog vectors:

$$a_i = \frac{\exp(e_i)}{\sum_{j=1}^{T}\exp(e_j)}, \qquad (8)$$

$$e_i = v \cdot \tanh(W s_i + U h_i), \qquad (9)$$

$$c(u) = \sum_{i=1}^{T} a_i h_i, \qquad (10)$$

where a_i is the weight of the i-th blog, s_i ∈ R^m is the one-hot source representation vector of the blog,
h_i ∈ R^n is the blog vector, v ∈ R^{n′}, W ∈ R^{n′×m}, U ∈ R^{n′×n}, and m is the number of all source platforms.
Once a user's context representation is obtained, the keyword matrix of all blog keywords extracted by
our model in task 1 is concatenated to it. The final features are the output of the above feature
engineering. Afterwards, an ANN layer is trained on the user embeddings from the training set and predicts
the probability distribution of user interests over the 42 tags for the validation and test sets according to their
embeddings.
3.3 User Growth Value Prediction
According to the description of task 3, the growth value can be estimated as the degree of activeness.
Therefore, our basic idea is to incorporate a user's interaction information and his or her behavioral statistical
features into a supervised learning model. The procedure of task 3 is demonstrated in Figure 3.
Figure 3. Framework of the stacking model in task 3 (behavior statistics → feature selection and extraction → Passive Aggressive and Gradient Boosting base regressors → NuSVR stacking layer → final result).
On the whole, we use a stacking framework [16] to enhance the accuracy of the final prediction. After the
basic behavior statistics analysis, the original features are selected as inputs to the
stacking model. The stacking model is divided into two layers, the base layer and the stacking layer.
In the base layer, we choose the Passive Aggressive Regressor [17] and the Gradient Boosting Regressor [18, 19]
as the group of basic regressors due to their excellent performance. In the stacking layer, we use a
support vector machine (SVM) model, specifically the NuSVR model, which can control its error rate.
Finally, we obtain the final user growth values.
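The two-layer stacking can be sketched with scikit-learn; for brevity the base models predict on their own training data, whereas proper stacking would use out-of-fold predictions, and the toy data and hyperparameters are ours, not the competition configuration.

```python
# Compact sketch of the stacking idea: averaged base-model predictions
# become the single feature fed into the NuSVR stacking layer.
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.random((80, 5))
y = X @ np.array([0.2, 0.1, 0.3, 0.0, 0.4])      # synthetic growth values

base = [PassiveAggressiveRegressor(max_iter=1000, random_state=0),
        GradientBoostingRegressor(n_estimators=50, random_state=0)]
for m in base:
    m.fit(X, y)

# base layer -> averaged prediction -> stacking layer
stack_feature = np.mean([m.predict(X) for m in base], axis=0).reshape(-1, 1)
final = NuSVR(nu=0.5).fit(stack_feature, y)
pred = final.predict(stack_feature)
print(pred.shape)
```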
3.3.1 Original Feature Selection

Figure 4 illustrates an example of the daily statistics of user behaviors, including posting, browsing,
commenting, voting up, voting down, adding favorites, following, and sending private messages. To predict
the user growth value, the dynamic changes of behaviors along the timeline are more useful.
To avoid the sparse data problem, we adopt monthly rather than daily statistics of user behaviors.

Figure 4. Example of daily statistics of user behaviors. Note: “Add” refers to “add favorites,” and “send” refers to
“send private messages.”
Then we use correlation analysis to exclude the “vote down” behavior because of its negative contribution
to model prediction. Afterwards, through feature selection, we use the average, a log adjustment and the growth
rate of the original data to obtain features for the stacking model:

$$LOG(d) = \log(d + 1), \qquad (11)$$

$$GR(d_t) = \frac{d_{t+1} - d_t}{d_t + 1}, \qquad (12)$$

where LOG(d) represents the adjusted value of data d, and GR(d_t) represents the
growth from data d_t in month t to data d_{t+1} in month t+1.
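The two derived features, under our reading of the printed Equations (11)–(12) (in particular the +1 smoothing terms), can be sketched as:

```python
# Sketch of the derived features: log adjustment of monthly counts and a
# smoothed month-over-month growth rate.
import numpy as np

def log_adjust(d):
    return np.log(d + 1)                          # LOG(d) = log(d + 1)

def growth_rate(monthly):
    d = np.asarray(monthly, dtype=float)
    return (d[1:] - d[:-1]) / (d[:-1] + 1)        # GR(d_t) = (d_{t+1} - d_t) / (d_t + 1)

print(growth_rate([0, 3, 3, 9]))
```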
3.3.2 PAR/GDR-NuSVR-Stacking Model (PGNS)
Once we have obtained the monthly statistics and derivative features described above, their combination
is sent as input into the Passive Aggressive Regressor and the Gradient Boosting Regressor
independently. By averaging the predictions of these two base models, a new feature is created and
input into the stacking model NuSVR. Because of the inherent randomness of the base models, we adopt a
self-check mechanism based on 10-fold cross-validation.
If the trained model obtains a score higher than a threshold S* under the given scoring rules, we feed
the corresponding features of the validation set or test set into the model to produce a prediction, which is
saved into a candidate set. Conversely, if the trained model obtains a 10-fold cross-validation score
lower than S*, the model is discarded and the program returns to the training session shown
in the dotted box for a new round of training.

In order to reduce the error of a single round of training, we run at least R* rounds and add
all predictions that score higher than S* to the candidate set. In our experience, the ratio
of the size of the candidate set to R* is about 0.45. When all rounds of training are completed, all predictions
in the candidate set are averaged to produce the final result.
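The self-check loop can be sketched as follows; `train_and_score` is a hypothetical stand-in for one training round returning a cross-validation score and a prediction, and `toy_round` merely fakes such rounds.

```python
# Sketch of the self-check mechanism: only rounds whose CV score clears
# the threshold S* contribute predictions to the candidate set, which is
# finally averaged.
import random

def self_check(train_and_score, s_star, rounds):
    candidates = []
    for seed in range(rounds):
        score, prediction = train_and_score(seed)
        if score >= s_star:                      # keep only sufficiently good rounds
            candidates.append(prediction)
    if not candidates:
        return None
    return sum(candidates) / len(candidates)     # final result: averaged prediction

def toy_round(seed):
    rng = random.Random(seed)
    return rng.uniform(0.6, 0.9), rng.uniform(0.4, 0.6)

print(self_check(toy_round, s_star=0.75, rounds=20))
```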
4. EVALUATION
In our model, we first adopt the Jieba toolkit for Chinese word segmentation, and then train word
embeddings with dimension 300 [11].
Mesa 3 shows the comparison results of our proposed approach for task 1. It is observed that the best
results are achieved when data of all the three aspects are used for capturing the main ideas of blogs.
https://github.com/fxsjy/jieba
Table 3. Comparison on task 1 with different aspects.

Approach | Results
BI: Blog itself | 0.505
ST: Same topic | 0.371
SU: Same user | 0.436
BI+ST+SU | 0.563
Besides, we also test the performance of our combined neural network with different embedding inputs.
Note that to obtain the results for an individual embedding, we train a new CNN model for blog embedding,
and compute the similarity between blog content and keywords in the embedding representation. The
experimental results are summarized in Table 4. The embedding of blog content proves
more effective than that of keywords, while together they achieve the best run.
Table 4. Comparison of different aspects on task 2.

Approach | Results
Blog embedding | 0.301
Keywords embedding | 0.245
Blog + keywords embedding | 0.378
Table 5 displays the overall performance of our system's best run on each individual task, which achieved
the sixth place in the competition.

Table 5. Performance of the UIR-SIST system in SMP CUP 2017.

 | Task 1 | Task 2 | Task 3 | Total
Training set (10-fold) | 0.610 | 0.390 | 0.765 | 1.765
Validation set | 0.560 | 0.390 | 0.730 | 1.680
Test set | 0.563 | 0.378 | 0.751 | 1.692
5. CONCLUSIONS AND FUTURE WORK
In this paper, we present the system we built for the User Profiling Technology Evaluation Campaign of SMP
CUP 2017. To complete task 1, we propose to extract keywords from three aspects of a user's blogs:
the blog itself, blogs on the same topic, and other blogs published by the same user. Then a
unified neural network model with a self-attention mechanism is constructed for task 2. The model is based
on multi-scale convolutional neural networks with the aim of capturing both local and global information
for user profiles. Finally, we adopt a stacking model for predicting user growth value. According to SMP
CUP 2017's metrics, our model runs achieved final scores of 0.563, 0.378 and 0.751 on the three tasks,
respectively.
Future work includes analysis of the relationships between users and blogs. The current system only uses
users' behavior in task 2, and the time when blogs are published is ignored. We plan to
include network embedding in our model. In addition, we will collect more blogs with real-time information,
and attempt to incorporate the time information into our weighting schema in those tasks.
AUTHOR CONTRIBUTIONS
B. Li (byli@uir.edu.cn, corresponding author) is the leader of the UIR-SIST system, who drew the whole
framework of the system. J. Lu (lj1230@nyu.edu) was responsible for building the model for keyword
extraction, while L. Chen (lec@boyabigdata.cn) and K. Meng (kmmeng@uir.edu.cn) were responsible for
the model construction of user interests tagging. F. Wang (wangfengyi18@mails.ucas.ac.cn) summarized
the user growth value prediction, while J. Xiang (xiang.j@husky.neu.edu) and N. Chen (nchen@uir.edu.cn)
summarized the evaluation and made error analysis. X. Han (hanxu@cnu.edu.cn) drafted the whole paper.
All authors revised and proofread the paper.
ACKNOWLEDGEMENTS
This work is partially supported by the National Natural Science Foundation of China (Grant numbers:
61502115, 61602326, U1636103 and U1536207), and the Fundamental Research Fund for the Central
Universities (Grant numbers: 3262017T12, 3262017T18, 3262018T02 and 3262018T58).
REFERENCES
[1] M. Ciot, M. Sonderegger, & D. Ruths. Gender inference of Twitter users in non-English contexts. In: Proceedings
of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 1136–1145.
doi: 10.1126/science.328.5974.38.
[2] L. Wendy, & R. Derek. What's in a name? Using first names as features for gender inference in Twitter.
In: Proceedings of the 2013 AAAI Spring Symposium: Analyzing Microtext, 2013, pp. 10–16. doi: 10.1103/
PhysRevB.76.054113.
[3] W. Liu, F.A. Zamal, & D. Ruths. Using social media to infer gender composition of commuter populations.
In: Proceedings of the International Conference on Weblogs and Social Media. Available at: http://www.
ruthsresearch.org/static/publication_files/LiuZamalRuths_WCMCW.pdf.
[4] D. Rao, & D. Yarowsky. Detecting latent user properties in social media. In: Proceedings of the NIPS MLSN
Workshop, 2010, pp. 1–7. doi: 10.1007/s10618-010-0210-x.
[5] M. Pennacchiotti, & A.M. Popescu. A machine learning approach to Twitter user classification. In:
Proceedings of the Fifth International Conference on Weblogs and Social Media, 2011, pp. 281–288. doi:
10.1145/2542214.2542215.
[6] M.D. Conover, J. Ratkiewicz, M. Francisco, B. Gonçalves, A. Flammini, & F. Menczer. Political polarization
on Twitter. In: Proceedings of the Fifth International Conference on Weblogs and Social Media, 2011, pp. 89–96.
Available at: https://journalistsresource.org/wp-content/uploads/2014/10/2847-14211-1-PB.pdf?x12809.
[7] C. Tu, z. Liu, & METRO. Sol. PRISM: Profession identification in social media with personal information and
community structure. En: Proceedings of Social Media Processing, 2015, páginas. 15–27. doi: 10.1007/978-981-
10-0080-5_2.
SMP CUP 2017. Disponible en: http://www.cips-smp.org/smp2017/.
[8]
[9] R. Mihalcea, & PAG. Tarau. TextRank: Bringing order into text. En: Proceedings of the 2004 Conference on
Empirical Methods in Natural Language Processing, 2004. Disponible en: http://www.aclweb.org/anthology/
W04-3252.
[10] D.M. Blei, A.Y. Ng, & M.I. Jordán. Latent dirichlet allocation. Journal of Machine Learning Research 3(2003),
993–1022. Disponible en: https://dl.acm.org/citation.cfm?id=944937]&preflayout=flat#source.
[11] t. Mikolov, k. Chen, GRAMO. Corrado, & j. Dean. Efficient estimation of word representations in vector space.
En: Proceedings of Workshop at International Conference on Learning Representations (LCLR). Disponible en:
https://www.researchgate.net/publication/234131319_Efficient_Estimation_of_Word_Representations_in_
Vector_Space.
[12] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W.. Hubbard, & L.D. Jackel. Handwritten digit
recognition with a backpropagation network. En: Proceedings of Advances in Neural Information Processing
Sistemas, 1990, páginas. 396–404. Disponible en: https://dl.acm.org/citation.cfm?id=109279%22.
[13] A. Krizhevsky, I. Sutskever, & GRAMO. Hinton. ImageNet classification with deep convolutional neural networks.
En: Proceedings of Advances in Neural Information Processing Systems, 2012, doi: 10.1145/3065386.
[14] Y. kim. Convolutional neural networks for sentence classification. En: Actas de la 2014 Conferencia
on Empirical Methods in Natural Language Processing, 2014, páginas. 1746–1751. Disponible en: https://arxiv.org/
pdf/1408.5882.pdf.
[15] R. Collobert, j. Weston, l. Bottou, METRO. Karlen, k. Kavukcuoglu, & PAG. Kuksa. Natural language processing
(almost) from scratch. The Journal of Machine Learning Research 12(2011), 2493–2537. doi: 10.1016/
j.chemolab.2011.03.009.
[16] D.H. Wolpert. Original contribution: Stacked generalization. Neural Netw 5(2)(1992), 241–259. doi:
10.1016/S0893-6080(05)80023-1.
[17] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, & Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research 7(3)(2006), 551–585. Available at: http://www.jmlr.org/papers/v7/crammer06a.html.
[18] J.H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics 29(5)
(2001), 1189–1232. doi: 10.1214/aos/1013203451.
[19] J. Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis 38(4)(2002), 367–378. doi: 10.1016/S0167-9473(01)00065-2.
Data Intelligence
Identifying User Profile by Incorporating Self-Attention Mechanism based on CSDN Data Set
AUTHOR BIOGRAPHY
Junru Lu is currently a Master's Degree candidate at the Center for Urban Science and Progress, New York University. He received his Bachelor Degree
from University of International Relations in 2018. His research interests
include natural language processing, text mining and social computing.
Le Chen received his Bachelor Degree from University of International
Relations in 2018. He is now working as a data analyst at Beijing Boya Bigdata Co. Ltd. His research interests include text mining and social
computing.
Kongming Meng is currently working as a data engineer at the DeepBrain Company. He received his Bachelor Degree from University of International Relations in 2018. His research interests include data mining and data analysis.
Fengyi Wang is currently a master student at the University of Chinese Academy of Sciences (CAS). She received her Bachelor Degree from University
of International Relations in 2018. Her research interests include natural
language processing and social network analysis.
Jun Xiang is currently a master student in the program of Computer Systems Engineering, Northeastern University. She received her Bachelor Degree from University of International Relations in 2018. She has published two papers in international conferences and Chinese journals during her undergraduate studies.
Nuo Chen got her Bachelor Degree from the School of Information Science
and Technology, University of International Relations in 2018. Her research interest is knowledge graphs.
Xu Han received her PhD Degree in 2011. She is an assistant professor at
the Capital Normal University and her research interests are artificial
intelligence and mobile cloud computing. She has published over 30 research
papers in major international journals and conferences.
Binyang Li received his PhD Degree from the Chinese University of Hong
Kong in 2012. He is now working as an associate professor in the School of
Information Science and Technology, University of International Relations.
His research interests include natural language processing, sentiment analysis
and social computing. He has published over 50 research papers in major
international journals and conferences.