
Unsupervised Abstractive Opinion Summarization
by Generating Sentences with Tree-Structured Topic Guidance

Masaru Isonuma1   Junichiro Mori1,2   Danushka Bollegala3   Ichiro Sakata1

1The University of Tokyo, Japan
2RIKEN, Japan
3University of Liverpool, United Kingdom

isonuma@ipr-ctr.t.u-tokyo.ac.jp   mori@mi.u-tokyo.ac.jp
danushka@liverpool.ac.uk   isakata@ipr-ctr.t.u-tokyo.ac.jp

Abstract

This paper presents a novel unsupervised abstractive summarization method for opinionated texts. While basic variational autoencoder-based models assume a unimodal Gaussian prior for the latent code of sentences, we replace it with a recursive Gaussian mixture, where each mixture component corresponds to the latent code of a topic sentence and is mixed by a tree-structured topic distribution. By decoding each Gaussian component, we generate sentences with tree-structured topic guidance, where the root sentence conveys generic content and the leaf sentences describe specific topics. Experimental results demonstrate that the generated topic sentences are appropriate as a summary of opinionated texts: they are more informative and cover more input content than those generated by a recent unsupervised summarization model (Bražinskas et al., 2020). Furthermore, we demonstrate that the variance of the latent Gaussians represents the granularity of sentences, analogous to Gaussian word embedding (Vilnis and McCallum, 2015).

1 Introduction

Summarizing opinionated texts, such as product reviews and online posts on Web sites, has attracted considerable attention recently along with the development of e-commerce and social media. Although extractive approaches are widely used in document summarization (Erkan and Radev, 2004; Ganesan et al., 2010), they often fail to provide an overview of the documents, particularly for opinionated texts (Carenini et al., 2013; Gerani et al., 2014). Abstractive summarization can overcome this challenge by paraphrasing and generalizing an entire document. Although supervised approaches have seen significant success with the development of neural architectures (See et al., 2017; Fabbri et al., 2019), they are limited to specific domains, e.g., news articles, where a large number of gold summaries are available. However, the domain of opinionated texts is diverse; manually writing gold summaries is therefore costly.

This lack of gold summaries has motivated prior work to develop unsupervised abstractive summarization of opinionated texts, for example, product reviews (Chu and Liu, 2019; Bražinskas et al., 2020; Amplayo and Lapata, 2020). While these approaches generate consensus opinions by condensing input reviews, two key components are absent: topics and granularity (i.e., the level of detail). For instance, as shown in Figure 1, a gold summary of a restaurant review provides the overall impression and details about certain topics, such as food, ambience, and service. Hence, a summary typically comprises diverse topics, some of which are described in detail, whereas others are mentioned concisely.

Based on this observation, we capture the topic-tree structure of reviews and generate topic sentences, that is, sentences summarizing specified topics. In the topic-tree structure, the root sentence conveys generic content, and the leaf sentences mention specific topics. From the generated topic sentences, we extract sentences with appropriate topics and levels of granularity as a summary. Regarding extractive summarization, capturing topics (Titov and McDonald, 2008; Isonuma et al., 2017; Angelidis and Lapata, 2018) and topic-tree structure (Celikyilmaz and Hakkani-Tur, 2010, 2011) is useful for detecting salient sentences. To the best of our knowledge, this is the first study to use the topic-tree structure in unsupervised abstractive summarization.

Figure 1: Outline of our approach. (1) The latent distribution of review sentences is represented as a recursive GMM and trained in an autoencoding manner. Then, (2) the topic sentences are inferred by decoding each Gaussian component. An example of a restaurant review and its corresponding gold summary are displayed.

The difficulty of generating sentences with tree-structured topic guidance lies in controlling the granularity of topic sentences. Wang et al. (2019) generated a sentence with designated topic guidance, assuming that the latent code of an input sentence can be represented by a Gaussian mixture model (GMM), where each Gaussian component corresponds to the latent code of a topic sentence. While they successfully generated sentences relating to a designated topic by decoding each mixture component, modeling sentence granularity in the latent space, so as to generate topic sentences with multiple granularities, remains an open problem.
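To make the mixture prior concrete, the following is a minimal PyTorch sketch of a GMM prior over sentence latent codes in the spirit of Wang et al. (2019); the sizes and names (n_topics, dim, pi) are illustrative assumptions, not the paper's implementation.

    import torch

    # Hypothetical sizes: 5 topic components, 32-dimensional latent codes.
    n_topics, dim = 5, 32
    pi = torch.softmax(torch.randn(n_topics), dim=0)  # topic (mixture) distribution
    mu = torch.randn(n_topics, dim)         # component means: latent codes of topic sentences
    log_sigma = torch.zeros(n_topics, dim)  # component log standard deviations

    def gmm_log_prob(x):
        # Log-density of latent codes x (batch, dim) under the mixture prior.
        comp = torch.distributions.Normal(mu, log_sigma.exp())  # (n_topics, dim)
        logp = comp.log_prob(x.unsqueeze(1)).sum(-1)            # (batch, n_topics)
        return torch.logsumexp(torch.log(pi) + logp, dim=-1)    # (batch,)

    # Decoding the mean of component k would yield a sentence for topic k,
    # e.g., topic_sentence_k = decoder(mu[k]).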

To overcome this challenge, we model sentence granularity by the variance of the latent code. We assume that general sentences have more uncertainty and are generated from a latent distribution with a larger variance, analogous to Gaussian word embedding (Vilnis and McCallum, 2015). Based on this assumption, we represent the latent codes of topic sentences with Gaussian distributions, where a parent Gaussian receives a larger variance and represents a more generic topic sentence than its children, as shown in Figure 1. To obtain latent codes with this characterization, we introduce a recursive Gaussian mixture prior for modeling the latent code of input sentences in reviews. A recursive GMM consists of Gaussian components that correspond to the nodes of the topic tree, and the child priors are set to the inferred parent posterior. Owing to this configuration, the Gaussian distribution of a higher topic receives a larger variance and conveys more general content than those of lower topics.
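The variance ordering can be illustrated with a standard product-of-Gaussians update: when a child's prior is set to its parent's inferred posterior, each level of inference shrinks the variance, so higher topics keep larger variances than lower ones. The update rule below is only a sketch of this argument under conjugate-Gaussian assumptions, not the paper's variational inference procedure.

    import torch

    def child_posterior(prior_mu, prior_var, obs_mu, obs_var):
        # Product-of-Gaussians update: the posterior variance is always smaller
        # than the prior variance, giving finer granularity as the tree deepens.
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        post_mu = post_var * (prior_mu / prior_var + obs_mu / obs_var)
        return post_mu, post_var

    # Root topic: generic content, unit variance.
    root_mu, root_var = torch.zeros(32), torch.ones(32)
    # Each child's prior is its parent's posterior, so variance shrinks with depth.
    child_mu, child_var = child_posterior(root_mu, root_var, torch.randn(32), torch.ones(32))
    leaf_mu, leaf_var = child_posterior(child_mu, child_var, torch.randn(32), torch.ones(32))
    assert (leaf_var < child_var).all() and (child_var < root_var).all()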

The contributions of our work are as follows:

• Experiments demonstrate that the generated summaries are more informative and cover more input content than those of a recent unsupervised summarization model (Bražinskas et al., 2020).

2 Preliminaries

Bowman et al. (2016) adapted the variational autoencoder (VAE; Kingma and Welling, 2014; Rezende et al., 2014) to obtain a density-based latent code of sentences. They assume the following generative process for documents:
For each document index d ∈ {1, . . . , D}:
    For each sentence index s ∈ {1, . . . , S_d} in d:
        1. Draw a latent code of the sentence x_s ∈ R^n:

           x_s ∼ p(x_s)    (1)

        2. Draw a sentence w_s:

           w_s | x_s ∼ p(w_s | x_s) = RNN(x_s)    (2)
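As a concrete reading of Eqs. (1) and (2), the following sketch draws a latent code from a standard normal prior and greedily decodes a sentence token by token with a GRU; the prior choice, module types, and sizes here are assumptions for illustration, not the original implementation.

    import torch
    import torch.nn as nn

    vocab, dim, hidden = 1000, 32, 64      # hypothetical sizes

    embed = nn.Embedding(vocab, dim)       # token embeddings
    rnn = nn.GRU(dim, hidden, batch_first=True)
    to_h0 = nn.Linear(dim, hidden)         # map latent code to the initial RNN state
    out = nn.Linear(hidden, vocab)         # project hidden state to vocabulary logits

    def generate(max_len=20):
        x = torch.randn(1, dim)            # Eq. (1): x_s ~ p(x_s), here N(0, I)
        h = to_h0(x).unsqueeze(0)          # (num_layers=1, batch=1, hidden)
        tok = torch.zeros(1, 1, dtype=torch.long)  # assumed BOS token index 0
        words = []
        for _ in range(max_len):           # Eq. (2): w_s ~ p(w_s | x_s) = RNN(x_s)
            o, h = rnn(embed(tok), h)
            tok = out(o[:, -1]).argmax(-1, keepdim=True)  # greedy next token
            words.append(tok.item())
        return words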