CrossWOZ: A Large-Scale Chinese Cross-Domain
Task-Oriented Dialogue Dataset

Qi Zhu1, Kaili Huang2, Zheng Zhang1, Xiaoyan Zhu1, Minlie Huang1∗

1Dept. of Computer Science and Technology, 1Institute for Artificial Intelligence,
1Beijing National Research Center for Information Science and Technology,
2Dept. of Industrial Engineering, Tsinghua University, Beijing, China
{zhu-q18,hkl16,z-zhang15}@mails.tsinghua.edu.cn
{zxy-dcs,aihuang}@tsinghua.edu.cn

Abstract

To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts on both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc.

1 Introduction

Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures (Yao et al., 2013; Wen et al., 2015; Mrkšić et al., 2017; Peng et al., 2017; Lei et al., 2018; Gür et al., 2018). However, research is still largely limited by the lack of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single-domain conversations, including ATIS (Hemphill et al., 1990), DSTC 2 (Henderson et al., 2014), Frames (El Asri et al., 2017), KVRET (Eric et al., 2017), WOZ 2.0 (Wen et al., 2017), and M2M (Shah et al., 2018).

∗Corresponding author.


Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed (Budzianowski et al., 2018b; Rastogi et al., 2019). The most notable corpus is MultiWOZ (Budzianowski et al., 2018b), a large-scale multi-domain dataset that consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy (Eric et al., 2019), and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to be located in the center of the town.

In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure 1. We compare CrossWOZ to other corpora in Tables 1 and 2. Our dataset has the following features compared to other corpora (particularly MultiWOZ (Budzianowski et al., 2018b)):

1. The dependency between domains is more
challenging because the choice in one domain
will affect the choices in related domains


in CrossWOZ. As shown in Figure 1 and Table 2, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.

2. It is the first Chinese corpus that contains
large-scale multi-domain task-oriented dia-
logues, consisting of 6K sessions and 102K
utterances for 5 domains (attraction, restau-
rant, hotel, metro, and taxi).

3. Annotation of dialogue states and dialogue
acts is provided for both the system side
and user side. The annotation of user states
enables us to track the conversation from
the user’s perspective and can empower
the development of more elaborate user
simulators.

In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ.

2 Related Work

According to whether the dialogue agent is human
or machine, we can group the collection methods
of existing task-oriented dialogue datasets into
three categories. The first one is human-to-human
dialogues. One of the earliest and best-known datasets, ATIS (Hemphill et al., 1990), used this setting, followed by El Asri et al. (2017), Eric et al. (2017), Wen et al. (2017), Lewis et al. (2017), Wei et al. (2018), and Budzianowski et al. (2018b). Though this setting requires much human effort, it can collect natural and diverse dialogues.
The second one is human-to-machine dialogues,
which need a ready dialogue system to converse
with humans. The famous Dialogue State Tracking
Challenges provided a set of human-to-machine
dialogue data (Williams et al., 2013; Henderson et al., 2014). The performance of the dialogue system will largely influence the quality of the dialogue data. The third one is machine-to-machine dialogues.

Figure 1: A dialogue example. The user state is initialized by the user goal: finding an attraction and one of its nearby hotels, then booking a taxi to commute between these two places. In addition to expressing pre-specified informable slots and filling in requestable slots, users need to consider and modify cross-domain informable slots (bold) that vary through the conversation. We only show a few turns (turn number on the left), each with either the user or system state of the current domain, shown above each utterance.


Dataset       DSTC2    WOZ 2.0  Frames   KVRET    M2M      MultiWOZ  Schema    CrossWOZ
Language      EN       EN       EN       EN       EN       EN        EN        CN
Speakers      H2M      H2H      H2H      H2H      M2M      H2H       M2M       H2H
# Domains     1        1        1        3        2        7         16        5
# Dialogues   1,612    600      1,369    2,425    1,500    8,438     16,142    5,012
# Turns       23,354   4,472    19,986   12,732   14,796   115,424   329,964   84,692
Avg. domains  1        1        1        1        1        1.80      1.84      3.24
Avg. turns    14.5     7.5      14.6     5.3      9.9      13.7      20.4      16.9
# Slots       8        4        61       13       14       25        214       72
# Values      212      99       3,871    1,363    138      4,510     14,139    7,871

Table 1: Comparison of CrossWOZ to other task-oriented corpora (training set). DSTC2 through M2M have single-domain goals; MultiWOZ, Schema, and CrossWOZ have multi-domain goals. H2H, H2M, and M2M represent human-to-human, human-to-machine, and machine-to-machine respectively. The average numbers of domains and turns are for each dialogue.

MultiWOZ
usr: I'm looking for a college type attraction.
...
usr: I would like to visit in town centre please.
...
usr: Can you find an Indian restaurant for me that is also in the town centre?

Schema
usr: I want a hotel in San Diego and I want to check out on Thursday next week.
...
usr: I need a one way flight to go there.

CrossWOZ
usr: Hello, could you recommend an attraction with a rating of 4.5 or higher?
sys: Tiananmen, Gui Street, and Beijing Happy Valley are very nice places.
usr: I like Beijing Happy Valley. What hotels are around this attraction?
sys: There are many, such as hotel A, hotel B, and hotel C.
usr: Great! I am planning to find a hotel to stay near the attraction. Which one has a rating of 4 or higher and offers wake-up call service?

Table 2: Cross-domain dialog examples in MultiWOZ, Schema, and CrossWOZ. The values of cross-domain constraints (bold) are underlined. Some turns are omitted to save space. Names of hotels are replaced by A, B, and C for simplicity. Cross-domain constraints are pre-specified in MultiWOZ and Schema, while determined dynamically in CrossWOZ. In CrossWOZ, the choice in one domain will greatly affect related domains.

It needs to build both user and system simulators to generate dialogue outlines, then use templates (Peng et al., 2017) to generate dialogues, or further use people to paraphrase the dialogues to make them more natural (Shah et al., 2018; Rastogi et al., 2019). This requires much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.

Most of the existing datasets only involve a single domain in one dialogue, except MultiWOZ (Budzianowski et al., 2018b) and Schema (Rastogi et al., 2019). The MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes the research on multi-domain dialogue modeling, such as policy learning (Takanobu et al., 2019), state tracking (Wu et al., 2019), and context-to-text generation (Budzianowski et al., 2018a). Recently, the Schema dataset has been collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on


different domains, such as requiring a restaurant and an attraction to be located in the same area, or the city of a hotel and the destination of a flight to be the same (Table 2).

Table 1 presents a comparison between our dataset and other task-oriented datasets. In comparison to MultiWOZ, our dataset is of comparable scale: 5,012 dialogues and 84K turns in the training set. The average numbers of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table 2, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal, since a tourist may want to go to more than one attraction.

To better track the conversation flow and model
user dialogue policy, we provide annotation of
user states in addition to system states and
dialogue acts. While the system state tracks the
dialogue history, the user state is maintained by
the user and indicates whether the sub-goals have
been completed, which can be used to predict
user actions. This information will facilitate the
construction of the user simulator.

To the best of our knowledge, CrossWOZ is the
first large-scale Chinese dataset for task-oriented
dialogue systems, which will
largely alleviate
the shortage of Chinese task-oriented dialogue
corpora that are publicly available.

3 Data Collection

Our corpus simulates scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as follows:

1. Database Construction: We crawled travel information in Beijing from the Web, including the Hotel, Attraction, and Restaurant domains (hereafter we name the three domains HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.

2. Goal Generation: A multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets to be located near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.

3. Dialogue Collection: Before the formal data collection started, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.

4. Dialogue Annotation: We used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.

3.1 Database Construction
We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table 3. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; and nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Because it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as "find another attraction near the first one" and "find a restaurant near the attraction".
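To make the entity schema concrete, the following is a minimal sketch of a database entry with nearby slots; the field names and values are illustrative assumptions, not the exact keys of the released database.

```python
# A minimal sketch of a database entry with nearby slots; field names and
# values are illustrative, not the exact keys of the released database.
attraction = {
    "name": "Beijing Happy Valley",
    "rating": 4.7,                        # common slot
    "fee": "free",                        # common slot
    "nearby_hotels": ["hotel A", "hotel B", "hotel C"],
    "nearby_restaurants": ["restaurant X", "restaurant Y"],
}

def nearby(entity, domain):
    """Return entities of `domain` that satisfy the nearby relation."""
    return entity.get(f"nearby_{domain}s", [])

print(nearby(attraction, "hotel"))  # ['hotel A', 'hotel B', 'hotel C']
```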

Domain                Attract.  Rest.   Hotel
# Entities            465       951     1,133
# Slots               9         10      8 + 37*
Avg. nearby attract.  4.7       3.3     0.8
Avg. nearby rest.     6.7       4.1     2.0
Avg. nearby hotels    2.1       2.4     -

Table 3: Database statistics. * indicates that there are 37 binary slots for hotel services such as wake-up call. The last three rows show the average number of nearby attractions/restaurants/hotels for each entity. We did not collect nearby hotels information for the hotel domain.

Nearest metro stations of HAR entities form the metro database. In contrast, we provided a pseudo car type and plate number for the taxi domain.

3.2 Goal Generation

To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named semantic tuples. The sub-goal id is used to distinguish sub-goals, which may be in the same domain. There are two types of slots: informable slots, which are the constraints that the user needs to inform the system, and requestable slots, which are the information that the user needs to inquire from the system. As shown in Table 4, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize the sub-goal id to connect different sub-goals. Thus the actual constraints vary according to different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table 4) through conversation.

There are four steps in goal generation. First, we
generate independent sub-goals in HAR domains.
For each domain in HAR domains, with the same
probability P we generate a sub-goal, while with

Id  Domain      Slot           Value
1   Attraction  fee            free
1   Attraction  name
1   Attraction  nearby hotels
2   Hotel       name           near (id=1)
2   Hotel       wake-up call   yes
2   Hotel       rating
3   Taxi        from           (id=1)
3   Taxi        to             (id=2)
3   Taxi        car type
3   Taxi        plate number

Table 4: A user goal example (translated into English). Slots with bold/italic/blank values are cross-domain informable slots, common informable slots, and requestable slots, respectively. In this example, the user wants to find an attraction and one of its nearby hotels, then book a taxi to commute between these two places.

the probability of 1 − P we do not generate
any sub-goal for this domain. Each sub-goal has
common informable slots and requestable slots.
As shown in Table 5, all slots of HAR domains
can be requestable slots, while the slots with an
asterisk can be common informable slots.

Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table 4), if its requestable slots contain "nearby hotels", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table 4) with the probability of Pattraction→hotel. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of 1 − Pattraction→hotel. This also works for the attraction and restaurant domains. Photel→hotel = 0 because we do not allow the user to find the nearby hotels of one hotel.

Third, we generate sub-goals in the metro and taxi domains. With the probability of Ptaxi, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table 4) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain, and we set Pmetro = Ptaxi. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table 5, the from and to slots are always cross-domain informable slots, whereas the others are always requestable slots.

Attraction domain
name*, rating*, fee*, duration*, address, phone,
nearby attract., nearby rest., nearby hotels

Restaurant domain
name*, rating*, cost*, dishes*, address, phone,
open, nearby attract., nearby rest., nearby hotels

Hotel domain
name*, rating*, price*, type*, 37 services*,
phone, address, nearby attract., nearby rest.

Taxi domain
from, to, car type, plate number

Metro domain
from, to, from station, to station

Table 5: All slots in each domain (translated into English). Slots in bold can be cross-domain informable slots. Slots with an asterisk are informable slots. All slots are requestable slots except the "from" and "to" slots in the taxi and metro domains. The "nearby attractions/restaurants/hotels" slots and the "dishes" slot can be multiple-valued (a list). The value of each "service" is either yes or no.


Last, we rearrange the order of the sub-goals to
generate more natural and logical user goals. We
require that a sub-goal should be followed by its
referred sub-goal as immediately as possible.
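Putting the four steps together, the following is a simplified sketch of the goal generator; the probability values are assumptions (the exact settings are not reported here), and slot sampling and the final reordering are reduced to comments.

```python
import random

# A simplified sketch of the four-step goal generation procedure; the
# probability values are assumptions, and slot sampling is omitted.
HAR = ["attraction", "restaurant", "hotel"]
P_DOMAIN = 0.6                 # step 1: same probability P for each HAR domain
P_NEARBY = {"attraction": 0.3, # step 2: e.g. P_attraction->hotel
            "restaurant": 0.3,
            "hotel": 0.0}      # P_hotel->hotel = 0
P_TRAFFIC = 0.3                # step 3: P_metro = P_taxi

def generate_goal(max_subgoals=5):
    """Return a goal as a list of (sub-goal id, domain, value) entries."""
    goal = []
    # Step 1: independent sub-goals in HAR domains.
    for domain in HAR:
        if random.random() < P_DOMAIN:
            goal.append((len(goal) + 1, domain, None))
    # Step 2: cross-domain sub-goals connected by the nearby relation
    # (only nearby hotels are sketched here).
    for sid, domain, _ in list(goal):
        if len(goal) < max_subgoals and random.random() < P_NEARBY[domain]:
            goal.append((len(goal) + 1, "hotel", f"near (id={sid})"))
    # Step 3: a taxi (or metro) sub-goal commuting between two HAR entities.
    if 2 <= len(goal) < max_subgoals and random.random() < P_TRAFFIC:
        a, b = goal[0][0], goal[1][0]
        goal.append((len(goal) + 1, "taxi", f"from (id={a}) to (id={b})"))
    # Step 4: reordering so that each sub-goal follows its referred sub-goal
    # as immediately as possible is omitted in this sketch.
    return goal

print(generate_goal())
```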

To make the workers aware of this cross-domain
feature, we additionally provide a task description
for each user goal in natural language, welches ist
generated from the structured goal by hand-crafted
templates.

Compared with the goals whose constraints are
all pre-specified, our goals impose much more
dependency between different domains, welche
will significantly influence the conversation. Der
exact values of cross-domain informable slots
are finally determined according to the dialogue
Kontext.

3.3 Dialogue Collection

We developed a specialized website that allows
two workers to converse synchronously and make
annotations online. On the website, workers are
free to choose one of the two roles: tourist (user)

or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation, while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers participated in the data collection.

In contrast, MultiWOZ (Budzianowski et al., 2018b) hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and had to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly, and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues.

3.3.1 User Side
The user state is the same as the user goal before
a conversation starts. At each turn, the user needs
to 1) modify the user state according to the system
response at the preceding turn, 2) select some
semantic tuples in the user state, which indicates
the dialogue acts, Und 3) compose the utterance
according to the selected semantic tuples. In
addition to filling the required values and updating
cross-domain informable slots with real values in
the user state, the user is encouraged to modify
the constraints when there is no result under such
constraints. The change will also be recorded in
the user state. Once the goal is completed (all the
values in the user state are filled), the user can
terminate the dialogue.

3.3.2 Wizard Side
We regard the database query as the system
state, which records the constraints of each
domain till the current turn. At each turn, the
wizard needs to 1) fill the query according to the
previous user response and search the database if
necessary, 2) select the retrieved entities, and 3) respond in natural language based on the
information of the selected entities. If none of the

entities satisfy all the constraints, the wizard will
try to relax some of them for a recommendation,
resulting in multiple queries. The first query
records original user constraints while the last
one records the constraints relaxed by the system.
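The query-and-relax behavior can be sketched as follows; the data structures and the order in which constraints are dropped are illustrative assumptions.

```python
# A sketch of the wizard's query-and-relax procedure; data structures and the
# order in which constraints are dropped are illustrative assumptions.
def query(db, constraints):
    """Return entities matching all (slot, value) constraints exactly."""
    return [e for e in db
            if all(e.get(slot) == value for slot, value in constraints.items())]

def wizard_search(db, constraints):
    """Record every issued query; relax constraints until something matches."""
    queries = [dict(constraints)]          # first query: original constraints
    results = query(db, constraints)
    relaxed = dict(constraints)
    while not results and relaxed:
        relaxed.pop(next(iter(relaxed)))   # drop one constraint and retry
        queries.append(dict(relaxed))      # last query: relaxed constraints
        results = query(db, relaxed)
    return queries, results

hotels = [{"name": "hotel A", "rating": 4.0, "wake-up call": "yes"}]
qs, res = wizard_search(hotels, {"rating": 4.5, "wake-up call": "yes"})
print(len(qs), [h["name"] for h in res])   # 2 ['hotel A']
```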

3.4 Dialogue Annotation

After collecting the conversation data, we used
some rules to annotate dialogue acts automati-
cally. Each utterance can have several dialogue
acts. Each dialogue act is a tuple that consists of
intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table 4 is selected by the user, then (Inform, Attraction, fee, free) is labeled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labeled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src domain, Attraction) is labeled. This intent is specially designed for the "nearby" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. The Inform intent is derived by matching the system utterance with the information of the selected entities. When the wizard selects multiple retrieved entities and recommends them, the Recommend intent is labeled. When the wizard expresses that no result satisfies the user constraints, NoOffer is labeled. For general intents such as "goodbye" and "thanks" at both user and system sides, keyword matching is applied.
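A rough sketch of the user-side labeling rules is given below, following the tuple conventions of Table 4; the actual rule set is richer than this.

```python
# A rough sketch of the user-side act labeling rules; the released rule set
# is richer than this. `goal_domains` maps sub-goal id -> domain.
def label_user_act(semantic_tuple, goal_domains):
    _, domain, slot, value = semantic_tuple
    if isinstance(value, str) and value.startswith("near (id="):
        ref_id = int(value[len("near (id="):-1])
        # The "nearby" constraint gets the special Select intent.
        return ("Select", domain, "src domain", goal_domains[ref_id])
    if value:                              # filled informable slot
        return ("Inform", domain, slot, value)
    return ("Request", domain, slot, "none")

goal_domains = {1: "Attraction", 2: "Hotel"}
print(label_user_act((1, "Attraction", "fee", "free"), goal_domains))
print(label_user_act((1, "Attraction", "name", ""), goal_domains))
print(label_user_act((2, "Hotel", "name", "near (id=1)"), goal_domains))
# ('Inform', 'Attraction', 'fee', 'free')
# ('Request', 'Attraction', 'name', 'none')
# ('Select', 'Hotel', 'src domain', 'Attraction')
```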

We also obtained a binary label for each seman-
tic tuple in the user state, which indicates whether
this semantic tuple has been selected to be
expressed by the user. This annotation directly
illustrates the progress of the conversation.

To evaluate the quality of the annotation of
dialogue acts and states (both user and system
states), three experts were employed to manually
annotate dialogue acts and states for the same 50
dialogues (806 utterances), 10 for each goal type
(see Section 4). Because dialogue act annotation is
not a classification problem, we didn’t use Fleiss’
kappa to measure the agreement among experts.
We used dialogue act F1 and state accuracy to
measure the agreement between each two ex-
perts’ annotations. The average dialogue act F1 is

94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations, which are regarded as the gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations.

                Train      Valid    Test
# Dialogues     5,012      500      500
# Turns         84,692     8,458    8,476
# Tokens        1,376,033  137,736  137,427
Vocab           12,502     5,202    5,143
Avg. sub-goals  3.24       3.26     3.26
Avg. STs        14.8       14.9     15.0
Avg. turns      16.9       16.9     17.0
Avg. tokens     16.3       16.3     16.2

Table 6: Data statistics. The average numbers of sub-goals, turns, and STs (semantic tuples) are for each dialogue. The average number of tokens is for each turn.

4 Statistics

After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, and the statistics are shown in Table 6. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) (Budzianowski et al., 2018b) and Schema (1.84) (Rastogi et al., 2019). The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.

According to the type of user goal, we group the

dialogues in the training set into five categories:

Single-domain (S) 417 dialogues have only one

sub-goal in HAR domains.

Independent multi-domain (M) 1,573 dialogues have multiple sub-goals (2∼3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.

Independent multi-domain + traffic (M+T) 691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3∼5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.

Goal type           S     M      M+T   CM     CM+T
# Dialogues         417   1,573  691   1,759  572
NoOffer rate        0.10  0.22   0.22  0.61   0.55
Multi-query rate    0.06  0.07   0.07  0.14   0.12
Goal change rate    0.10  0.28   0.31  0.69   0.63
Avg. dialogue acts  1.85  1.90   2.09  2.06   2.11
Avg. sub-goals      1.00  2.49   3.62  3.87   4.57
Avg. STs            4.5   11.3   15.8  18.2   20.7
Avg. turns          6.8   13.7   16.0  21.0   21.6
Avg. tokens         13.2  15.2   16.3  16.9   17.0

Table 7: Statistics for dialogues of different goal types in the training set. NoOffer rate and Goal change rate are for each dialogue. Multi-query rate is for each system turn. The average number of dialogue acts is for each turn.

Cross multi-domain (CM) 1,759 dialogues have
multiple sub-goals (2∼5) in HAR domains
with cross-domain informable slots.

Cross multi-domain + traffic (CM+T) 572 dia-
logues have multiple sub-goals in HAR
domains with cross-domain informable slots
and at least one sub-goal in the metro or taxi
domain (3∼5 sub-goals).

The data statistics are shown in Table 7. As mentioned in Section 3.2, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus, in terms of task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table 7. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit on the maximal number of sub-goals, the ratio of the dialogue number of CM+T to CM is smaller than that of M+T to M.

CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more "NoOffer" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation, while the user will compromise and change the original goal. The negotiation process is captured by the "NoOffer rate", "Multi-query rate", and "Goal change rate" in Table 7. In addition, the "Multi-query rate" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.

Figure 2: Distributions of dialogue length for different goal types in the training set.

The distribution of dialogue length is shown in Figure 2, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (around 22%) cannot further generate a sub-goal in traffic domains and become CM+T goals.

5 Corpus Features

Our corpus is unique in the following aspects:

• Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In turn, the collected dialogues are more complex and natural for cross-domain dialogue tasks.

• A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.

• Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily.
6 Benchmark and Analysis

CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provide benchmark models for different components of a pipelined task-oriented dialogue system (Figure 3), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 (Zhu et al., 2020), an open-source task-oriented dialog system toolkit. We also provide a rule-based user simulator, which can be used to train dialogue policy and generate simulated dialogue data. The benchmark models and simulator will greatly facilitate researchers to compare and evaluate their models on our corpus.

Figure 3: Pipelined user simulator (left) and pipelined task-oriented dialogue system (right). Solid connections are for natural language level interaction, and dashed connections are for dialogue act level. The connections without comments represent dialogue acts.

6.1 Natural Language Understanding

Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.

Model: We adapted BERTNLU from ConvLab-2. BERT (Devlin et al., 2019) has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT (Cui et al., 2019; the BERT-wwm-ext model in https://github.com/ymcui/Chinese-BERT-wwm) for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Because there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings ("free") as input and outputs tags in BIO schema ("B-Inform-Attraction-fee"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of the last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLPs is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.
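The following is a minimal PyTorch sketch of the two prediction heads described above; BERT itself is stubbed out with random tensors, the tag and act inventory sizes are placeholders, and the real BERTNLU implementation in ConvLab-2 differs in details.

```python
import torch
import torch.nn as nn

# A minimal sketch of the two BERTNLU heads; BERT itself is stubbed out with
# random tensors, and sizes are placeholders. word_emb: token embeddings,
# cls_emb: sentence representation, ctx_emb: context embedding of [CLS].
class NLUHeads(nn.Module):
    def __init__(self, hidden=768, num_tags=21, num_acts=50):
        super().__init__()
        # Each input is concatenated with the context embedding, hence 2*hidden.
        self.slot_mlp = nn.Linear(2 * hidden, num_tags)  # BIO tag per token
        self.act_mlp = nn.Linear(2 * hidden, num_acts)   # binary per value-free act

    def forward(self, word_emb, cls_emb, ctx_emb):
        seq_len = word_emb.size(0)
        tag_logits = self.slot_mlp(
            torch.cat([word_emb, ctx_emb.expand(seq_len, -1)], dim=-1))
        act_probs = torch.sigmoid(
            self.act_mlp(torch.cat([cls_emb, ctx_emb], dim=-1)))
        return tag_logits, act_probs

heads = NLUHeads()
tags, acts = heads(torch.randn(12, 768), torch.randn(768), torch.randn(768))
print(tags.shape, acts.shape)  # torch.Size([12, 21]) torch.Size([50])
```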
Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table 8. We further tested the performance on different intent types, as shown in Table 9. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of "General" intent and the increase of "NoOffer" as well as "Select" intent in the dialogue data. We also noted that the F1 score of "Select" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Because recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively.

Model              Metric                               S      M      M+T    CM     CM+T   Overall
BERTNLU            Dialogue act F1                      96.69  96.01  96.15  94.99  95.38  95.53
BERTNLU – context  Dialogue act F1                      94.55  93.05  93.70  90.66  90.82  91.85
RuleDST            Joint state accuracy (single turn)   84.17  78.17  81.93  63.38  67.86  71.33
TRADE              Joint state accuracy                 71.67  45.29  37.98  30.77  25.65  36.08
SL policy          Dialogue act F1                      50.28  44.97  54.01  41.65  44.02  44.92
SL policy          Dialogue act F1 (delex)              67.96  67.35  73.94  62.27  66.29  66.02
Simulator          Joint state accuracy (single turn)   63.53  48.79  50.26  40.66  41.76  45.00
Simulator          Dialogue act F1 (single turn)        85.99  81.39  80.82  75.27  77.23  78.39
DA Sim             Task finish rate                     76.5   49.4   33.7   17.2   15.7   34.6
NL Sim (Template)  Task finish rate                     67.4   33.3   29.1   10.0   10.0   23.6
NL Sim (SC-LSTM)   Task finish rate                     60.6   27.1   23.1   8.8    9.0    19.7

Table 8: Performance of benchmark models. "Single turn" means having the gold information of the last turn. Task finish rate is evaluated on 1,000 simulations for each goal type. It's worth noting that "task finish" does not mean the task is successful, because the system may provide wrong information. Results show that cross multi-domain dialogues (CM and CM+T) are challenging for these tasks.

                   General  Inform  Request  Recom.  NoOffer  Select
BERTNLU            99.45    94.67   96.57    98.41   93.87    82.25
BERTNLU – context  99.69    90.80   91.98    96.92   93.05    68.40

Table 9: F1 score of different intent types. "Recom." represents "Recommend".

6.2 Dialogue State Tracking

Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models that obtain the system state directly from the context.

Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator; Wu et al., 2019; https://github.com/jasonwu0731/trade-dst) in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, if one of the user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the "fee" slot in the attraction domain will be filled with "free". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section 3.3.2, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.

Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table 8). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive a "Select" intent from users (e.g., "Find a hotel near the attraction"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well.
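The hand-crafted update rule of RuleDST can be sketched as follows, assuming dialogue acts are (intent, domain, slot, value) tuples; the actual ConvLab-2 implementation handles many more cases.

```python
# A sketch of a RuleDST-style update; the actual ConvLab-2 implementation
# handles many more cases. Acts are (intent, domain, slot, value) tuples.
def rule_dst_update(state, user_acts):
    """Fill the system state (a nested dict) from the last user dialogue acts."""
    for intent, domain, slot, value in user_acts:
        if intent == "Inform":
            state.setdefault(domain, {})[slot] = value
        elif intent == "Select":
            # Domain transition: carry the referred entity over as a constraint.
            state.setdefault(domain, {})["src domain"] = value
    return state

state = rule_dst_update({}, [("Inform", "Attraction", "fee", "free")])
print(state)  # {'Attraction': {'fee': 'free'}}
```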
6.3 Dialogue Policy Learning

Task: Dialogue policy receives state s and outputs system action a at each turn. Compared with the state given by a dialogue state tracker, s may have more information, such as the last user dialogue acts and the entities provided by the backend database.

Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state s consists of the last system dialogue acts, the last user dialogue acts, the system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action a consists of the delexicalized dialogue acts of the current turn, which ignore the exact values of the slots; the values are filled back in after prediction.

Result Analysis: As illustrated in Table 8, there is a large gap between the F1 score of exact dialogue acts and the F1 score of delexicalized dialogue acts, which means we need a powerful system state tracker to find correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. In addition, when there is a "Select" intent in the preceding user dialogue acts, the F1 scores of exact and delexicalized dialogue acts are 41.53% and 54.39% respectively. This shows that the policy performs poorly for cross-domain transition.

6.4 Natural Language Generation

Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders are replaced by the exact values, which is called lexicalization.

Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) (Wen et al., 2015) for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM we adapted the implementation on MultiWOZ (https://github.com/andy194673/nlg-sclstm-multiwoz) and trained two SC-LSTMs with system-side and user-side utterances respectively.

Input: (Inform, Restaurant, name, $name), (Inform, Restaurant, cost, $cost)
SC-LSTM: I recommend you $name. It costs $cost.
TemplateNLG: 1) $name is a nice choice. But it costs $cost. 2) The dish you want doesn't cost so much. I recommend you $name. It costs $cost.

Table 10: Comparison of SC-LSTM and TemplateNLG. The input is delexicalized dialogue acts, where the actual values are replaced with $name and $cost. Two retrieved results are shown for TemplateNLG.

Result Analysis: We calculated corpus-level BLEU as used by Wen et al. (2015). We took all utterances with the same delexicalized dialogue acts as references (100 references on average), which results in a high BLEU score. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For the system side, the two scores are 0.6828 and 0.8595. As exemplified in Table 10, the gap between the two models can be attributed to the fact that SC-LSTM generates a common pattern while TemplateNLG retrieves an original sentence which has more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpora are not comparable.
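Lexicalization in a TemplateNLG-style model amounts to retrieving a delexicalized template and filling the placeholders back in. The sketch below reduces template retrieval to a dictionary lookup; the template text and values are illustrative, not taken from the corpus.

```python
# A sketch of template retrieval plus lexicalization; the template store is
# reduced to a dict, and the template text and values are illustrative.
TEMPLATES = {
    ("Inform-Restaurant-name", "Inform-Restaurant-cost"):
        "I recommend you {name}. It costs {cost}.",
}

def lexicalize(dialogue_acts):
    """dialogue_acts: list of (intent, domain, slot, value) with real values."""
    key = tuple(f"{i}-{d}-{s}" for i, d, s, _ in dialogue_acts)
    template = TEMPLATES[key]                   # retrieve delexicalized template
    values = {slot: value for _, _, slot, value in dialogue_acts}
    return template.format(**values)            # fill the placeholders back in

acts = [("Inform", "Restaurant", "name", "restaurant X"),
        ("Inform", "Restaurant", "cost", "150 yuan")]
print(lexicalize(acts))  # I recommend you restaurant X. It costs 150 yuan.
```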
6.5 User Simulator

Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at the dialogue act level (e.g., the "Usr Policy" in Figure 3) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at the natural language level (e.g., the left part in Figure 3) directly takes the system's utterance as input and outputs the user's utterance.

Model: We built a rule-based user simulator that works at the dialogue act level. Different from the agenda-based (Schatzmann et al., 2007) user simulator that maintains a stack-like agenda, our simulator maintains the user state straightforwardly (Section 3.3.1). The simulator will generate a user goal as described in Section 3.2. At each user turn, the simulator receives the system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system informs the simulator that the attraction is free, then the simulator will fill the "fee" slot in the user state with "free" and ask for the next empty slot, such as "address". The simulator terminates when all requestable slots are filled and all cross-domain informable slots are filled with real values.

Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table 8. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation; our corpus supports the development of more elaborate simulators, as we provide the annotation of user-side dialogue states and dialogue acts.
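One user turn of such a rule-based simulator might look like the following sketch, where the user state is reduced to a flat slot-to-value dict; the released simulator tracks the full semantic-tuple state described in Section 3.3.1.

```python
# A sketch of one simulator turn; the user state is reduced to a flat dict
# with None for unfilled requestable slots. The released simulator tracks
# the full semantic-tuple state.
def simulator_turn(user_state, system_acts, domain="Attraction"):
    """Update the user state from system acts, then request the next empty slot."""
    for intent, _, slot, value in system_acts:
        if intent == "Inform" and slot in user_state:
            user_state[slot] = value                 # e.g. fill "fee" with "free"
    for slot, value in user_state.items():
        if value is None:                            # ask for the next empty slot
            return [("Request", domain, slot, "none")]
    return [("General", domain, "thank", "none")]    # goal completed

state = {"fee": None, "address": None}
print(simulator_turn(state, [("Inform", "Attraction", "fee", "free")]))
# [('Request', 'Attraction', 'address', 'none')]
```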
6.6 Evaluation with User Simulation

In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator described above. Three configurations were explored:

DA Sim: Simulation at the dialogue act level. As shown by the dashed connections in Figure 3, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.

NL Sim (Template): Simulation at the natural language level using TemplateNLG. As shown by the solid connections in Figure 3, the simulator and the dialogue system were additionally equipped with BERTNLU and TemplateNLG.

NL Sim (SC-LSTM): Simulation at the natural language level using SC-LSTM, which replaces TemplateNLG in the second configuration.

When all the slots in a user goal are filled with real values, the simulator terminates. This is regarded as "task finish". It's worth noting that "task finish" does not mean the task is successful, because the system may provide wrong information. We calculated the "task finish rate" on 1,000 simulations for each goal type (see Table 8). Findings are summarized below:

1. Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T, although each module performs well in traffic domains, additional sub-goals in these domains are still difficult to accomplish.

2. The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of the NLU and NLG modules is high, the two modules still harm the overall performance. Thus more powerful models are needed for all components of a pipelined dialogue system.

3. TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural language level simulation. This may be attributed to the fact that BERTNLU prefers templates retrieved from the training set.

7 Conclusion

In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both the user side and the system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, and so on. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which shows that more powerful models are required for all components of a pipelined cross-domain dialogue system.

Acknowledgments

This work was supported by the National Science Foundation of China (grant no. 61936010/61876096) and the National Key R&D Program of China (grant no. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback.

References

Paweł Budzianowski, Iñigo Casanueva, Bo-Hsiang Tseng, and Milica Gašić. 2018a. Towards end-to-end multi-domain dialogue modelling.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018b. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: A corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 207–219, Saarbrücken, Germany. Association for Computational Linguistics.

Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyag Gao, and Dilek Hakkani-Tur. 2019. MultiWOZ 2.1: Multi-domain dialogue state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669.

Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49, Saarbrücken, Germany. Association for Computational Linguistics.

Izzeddin Gür, Dilek Hakkani-Tür, Gokhan Tür, and Pararth Shah. 2018. User modeling for task oriented dialogues. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 900–906. IEEE.

Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.

Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272, Philadelphia, PA. Association for Computational Linguistics.

Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447, Melbourne, Australia. Association for Computational Linguistics.

Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? End-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443–2453, Copenhagen, Denmark. Association for Computational Linguistics.

Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788, Vancouver, Canada. Association for Computational Linguistics.

Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, and Kam-Fai Wong. 2017. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2231–2240, Copenhagen, Denmark. Association for Computational Linguistics.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.
Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149–152, Rochester, New York. Association for Computational Linguistics.

Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.

Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 100–110, Hong Kong, China. Association for Computational Linguistics.

Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuanjing Huang, Kam-fai Wong, and Xiangying Dai. 2018. Task-oriented dialogue system for automatic diagnosis. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201–207, Melbourne, Australia. Association for Computational Linguistics.

Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721, Lisbon, Portugal. Association for Computational Linguistics.

Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449, Valencia, Spain. Association for Computational Linguistics.

Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413, Metz, France. Association for Computational Linguistics.

Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808–819, Florence, Italy. Association for Computational Linguistics.
Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. 2013. Recurrent neural networks for language understanding. In Interspeech, pages 2524–2528.

Qi Zhu, Zheng Zhang, Yan Fang, Xiang Li, Ryuichi Takanobu, Jinchao Li, Baolin Peng, Jianfeng Gao, Xiaoyan Zhu, and Minlie Huang. 2020. ConvLab-2: An open-source toolkit for building, evaluating, and diagnosing dialogue systems. arXiv preprint arXiv:2002.04793.