Heterogeneous Supervised Topic Models
Dhanya Sridhar, Hal Daumé III, and David Blei
Université de Montréal and Mila-Quebec AI Institute, Canada; University of Maryland and Microsoft Research, USA; Columbia University, USA
dhanya.sridhar@mila.quebec, hal3@umd.edu, david.blei@columbia.edu
Abstract: Researchers in the social sciences are often interested in the relationship between text and an outcome of interest, where the goal is to both uncover latent patterns in …
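As a rough illustration of the setting this abstract describes, latent text patterns paired with an outcome of interest, the sketch below fits a plain topic model and then regresses a toy outcome on the topic proportions. This is a generic two-stage baseline, not the paper's heterogeneous supervised topic model; the documents, outcomes, and hyperparameters are made up.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

# Toy corpus with an outcome attached to each document (e.g., a rating).
docs = [
    "the service was slow and the food was cold",
    "great food friendly staff and fast service",
    "terrible experience the staff was rude",
    "delicious meal and a lovely atmosphere",
]
outcomes = np.array([2.0, 5.0, 1.0, 4.5])

# Stage 1: uncover latent topics in the text.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # per-document topic proportions

# Stage 2: relate the latent structure to the outcome of interest.
reg = LinearRegression().fit(theta, outcomes)
print("topic proportions:\n", theta.round(2))
print("regression coefficients:", reg.coef_.round(2))
```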
Fact Checking with Insufficient Evidence
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein
Department of Computer Science, University of Copenhagen, Denmark
{pepa, simonsen, c.lioma, augenstein}@di.ku.dk
Abstract: Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence, and otherwise …
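The abstract's core requirement, predicting a verdict only when the evidence suffices, can be illustrated with a simple confidence-based abstention rule. This is a generic sketch, not the authors' method; the probabilities are assumed to come from some unspecified claim-plus-evidence classifier.

```python
import numpy as np

LABELS = ["supported", "refuted"]

def predict_or_abstain(class_probs: np.ndarray, threshold: float = 0.7) -> str:
    """Return a veracity verdict only when the classifier is confident enough;
    otherwise flag the evidence as insufficient. `class_probs` is assumed to be
    the softmax output of a claim+evidence model (not implemented here)."""
    best = int(np.argmax(class_probs))
    if class_probs[best] < threshold:
        return "insufficient evidence"
    return LABELS[best]

# Toy usage: one confident case, one borderline case.
print(predict_or_abstain(np.array([0.93, 0.07])))  # -> supported
print(predict_or_abstain(np.array([0.55, 0.45])))  # -> insufficient evidence
```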
True Few-Shot Learning with Prompts—A Real-World Perspective
Timo Schick and Hinrich Schütze
Center for Information and Language Processing (CIS), LMU Munich, Germany
schickt@cis.lmu.de, inquiries@cislmu.org
Abstract: Prompt-based approaches excel at few-shot learning. However, Perez et al. (2021) recently cast doubt on their performance, as they had difficulty getting good results in a "true" few-shot setting in which prompts and hyperparameters cannot be tuned on …
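For readers unfamiliar with prompt-based few-shot learning, the sketch below scores hand-written label words at a mask position with an off-the-shelf masked language model. It illustrates the general cloze-prompt idea only; the pattern, verbalizers, and model choice are assumptions and do not reproduce the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical cloze pattern and label words (verbalizers) for a toy sentiment
# task; these are not the prompts or tasks used in the paper.
MODEL_NAME = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

VERBALIZERS = {"positive": " great", "negative": " terrible"}

def classify(text: str) -> str:
    # The pattern turns classification into filling in a masked label word.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos[0]]
    # Compare the logits of the label words at the mask position.
    scores = {
        label: logits[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]].item()
        for label, word in VERBALIZERS.items()
    }
    return max(scores, key=scores.get)

print(classify("The movie was a complete waste of time."))  # expected: negative
```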
Data-to-text Generation with Variational Sequential Planning
Ratish Puduppully, Yao Fu, and Mirella Lapata
Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK
r.puduppully@sms.ed.ac.uk, yao.fu@ed.ac.uk, mlap@inf.ed.ac.uk
Abstract: We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input. We focus on generating long-form text, that is, documents with mul…
Uncertainty Estimation and Reduction of Pre-trained Models for Text Regression
Yuxia Wang, Daniel Beck, Timothy Baldwin, and Karin Verspoor
The University of Melbourne, Melbourne, Victoria, Australia; RMIT University, Melbourne, Victoria, Australia
yuxiaw@student.unimelb.edu.au, d.beck@unimelb.edu.au, tb@ldwin.net, karin.verspoor@rmit.edu.au
Abstract: State-of-the-art classification and regression models are often not well calibrated, and cannot reliably provide uncertainty estimates, limiting their utility in safety-critical applications such as clinical decision-making. While recent …
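One common way to obtain the kind of uncertainty estimates the abstract refers to is Monte Carlo dropout, sketched below on a toy regressor. This is a generic illustration under that assumption, not the calibration or uncertainty-reduction techniques studied in the paper; the tiny network and random inputs stand in for a pre-trained text regressor.

```python
import torch
import torch.nn as nn

# Tiny stand-in regressor; in the paper's setting this would be a pre-trained
# text regression model and the inputs would be encoded sentences.
class TinyRegressor(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Dropout(0.2), nn.Linear(32, 1)
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Keep dropout active at test time and average stochastic forward passes;
    the spread across passes serves as a rough uncertainty estimate."""
    model.train()  # leaves dropout enabled
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

model = TinyRegressor()
x = torch.randn(4, 16)  # stand-in for sentence embeddings
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(-1), std.squeeze(-1))
```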
Is My Model Using the Right Evidence? Systematic Probes for Examining Evidence-Based Tabular Reasoning
Vivek Gupta, Riyaz A. Bhat, Atreya Ghosal, Manish Shrivastava, Maneesh Singh, and Vivek Srikumar
University of Utah, USA; Verisk Inc., India; IIIT-Hyderabad, India
{vgupta, svivek}@cs.utah.edu, {riyaz.bhat, maneesh.singh}@verisk.com, atreyee.ghosal@research.iiit.ac.in, m.shrivastava@iiit.ac.in
Abstract: Neural models command state-of-the-art performance across NLP tasks, including ones involving "reasoning". Models claiming to reason about the evidence presented to …
End-to-end Argument Mining with Cross-corpora Multi-task Learning
Gaku Morio, Hiroaki Ozaki, Terufumi Morishita, and Kohsuke Yanai
Research and Development Group, Hitachi, Ltd., Kokubunji, Tokyo, Japan
{gaku.morio.vn, hiroaki.ozaki.yu, terufumi.morishita.wp, kohsuke.yanai.cs}@hitachi.com
Abstract: Mining an argument structure from text is an important step for tasks such as argument search and summarization. While studies on argument(ation) mining have proposed promising neural network models, they usually suffer from a shortage …
Document Summarization with Latent Queries
Yumo Xu and Mirella Lapata
Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
yumo.xu@ed.ac.uk, mlap@inf.ed.ac.uk
Abstract: The availability of large-scale datasets has driven the development of neural models that create generic summaries for single or multiple documents. For query-focused summarization (QFS), labeled training data in the form of …
Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference
Bangzheng Li, Wenpeng Yin, and Muhao Chen
University of Southern California, USA; University of Illinois at Urbana-Champaign, USA; Temple University, USA
vincentleebang@gmail.com, wenpeng.yin@temple.edu, muhaoche@usc.edu
Abstract: The task of ultra-fine entity typing (UFET) seeks to predict diverse and free-form words or phrases that describe the appropriate types of entities mentioned in sentences. A key challenge …
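As a loose illustration of using natural language inference as indirect supervision for typing, the snippet below scores candidate type phrases for a mention with an off-the-shelf NLI model through Hugging Face's zero-shot classification pipeline. The sentence, candidate types, and hypothesis template are made up, and this is not the paper's training or inference procedure.

```python
from transformers import pipeline

# Off-the-shelf NLI model used for zero-shot scoring of candidate types.
nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "Messi curled the free kick into the top corner in the 89th minute."
candidate_types = ["athlete", "politician", "soccer player", "organization"]

result = nli(
    sentence,
    candidate_labels=candidate_types,
    hypothesis_template="In this sentence, Messi is a {}.",
    multi_label=True,  # ultra-fine typing allows several types per mention
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```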
It’s not Rocket Science: Interpreting Figurative Language in Narratives
Tuhin Chakrabarty, Yejin Choi, and Vered Shwartz
Columbia University, USA; Allen Institute for Artificial Intelligence, USA; Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; University of British Columbia, Canada
tuhin.chakr@cs.columbia.edu, yejinc@allenai.org, vshwartz@cs.ubc.ca
Abstract: Figurative language is ubiquitous in English. Yet, the vast majority of NLP research focuses on literal language. Existing text …
Sentence Similarity Based on Contexts
Xiaofei Sun, Yuxian Meng, Xiang Ao, Fei Wu, Tianwei Zhang, Jiwei Li, and Chun Fan
Zhejiang University, China; Shannon.AI, China; Chinese Academy of Sciences, China; Nanyang Technological University, Singapore; Computer Center, Peking University, China; National Biomedical Imaging Center, Peking University, China; Peng Cheng Laboratory, China
{xiaofei sun, yuxian meng, jiwei li}@shannonai.com, aoxiang@ict.ac.cn, wufei@zju.edu.cn, tianwei.zhang@ntu.edu.sg, fanchun@pku.edu.cn
Abstract: Existing methods to measure sentence similarity …
Relational Memory-Augmented Language Models
Qi Liu, Dani Yogatama, and Phil Blunsom
DeepMind, United Kingdom; University of Oxford, United Kingdom
{qi.liu, phil.blunsom}@cs.ox.ac.uk, dyogatama@deepmind.com
Abstract: We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph. We represent the graph as a collection of relation triples and retrieve relevant relations for a given context to improve text generation. Experiments on WikiText-103, WMT19, and …
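The abstract gives the high-level recipe: store the knowledge graph as relation triples and retrieve the relations relevant to the current context. The sketch below mimics that recipe with naive string-overlap retrieval and prepends verbalized triples to the language model's input; the triples are invented, and the retrieval and conditioning in the paper are considerably more sophisticated.

```python
# Invented triples standing in for a knowledge graph.
TRIPLES = [
    ("Edinburgh", "capital_of", "Scotland"),
    ("Scotland", "part_of", "United Kingdom"),
    ("Edinburgh", "hosts", "Edinburgh Festival Fringe"),
]

def retrieve(context: str, triples, k: int = 2):
    # Score a triple by how many of its entities appear in the context.
    scored = [(sum(e.lower() in context.lower() for e in (h, t)), (h, r, t))
              for h, r, t in triples]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [triple for score, triple in scored[:k] if score > 0]

def augment(context: str) -> str:
    # Verbalize the retrieved triples and prepend them to the LM input.
    facts = "; ".join(f"{h} {r.replace('_', ' ')} {t}"
                      for h, r, t in retrieve(context, TRIPLES))
    return f"Facts: {facts}\nContext: {context}" if facts else context

print(augment("The festival takes place in Edinburgh every August."))
```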
MuSiQue: Multihop Questions via Single-hop Question Composition
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal
Stony Brook University, Stony Brook, USA; Allen Institute for AI, Seattle, USA
{hjtrivedi, niranjan}@cs.stonybrook.edu, {tushark, ashishs}@allenai.org
Abstract: Multihop reasoning remains an elusive goal as existing multihop benchmarks are known to be largely solvable via shortcuts. Can we create a question answering (QA) dataset that, by construction, requires proper multihop reasoning? …
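A toy illustration of the composition idea in the title: build a 2-hop question by substituting the answer of one single-hop question (the bridge entity) into another. The question pairs and the hand-written rewording below are invented; MuSiQue's actual construction pipeline is far more involved.

```python
# Two invented single-hop QA pairs; the bridge entity is hop1's answer.
hop1 = {"question": "Which country is Mount Everest located in?", "answer": "Nepal"}
hop2 = {"question": "What is the capital of Nepal?", "answer": "Kathmandu"}

def compose(h1: dict, h2: dict) -> dict:
    """Build a 2-hop question by replacing the bridge entity in the second
    question with a (hand-written) noun-phrase rewording of the first."""
    bridge = h1["answer"]
    assert bridge in h2["question"], "second hop must mention the bridge entity"
    multihop_question = h2["question"].replace(
        bridge, "the country Mount Everest is located in"
    )
    return {"question": multihop_question,
            "answer": h2["answer"],
            "decomposition": [h1, h2]}

print(compose(hop1, hop2)["question"])
# -> What is the capital of the country Mount Everest is located in?
```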
The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan
Facebook AI Research, USA; Facebook AI Research, France; LORIA
flores@fb.com
Abstract: One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation …
Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval
Gregor Geigle, Jonas Pfeiffer, Nils Reimers, Ivan Vulić, and Iryna Gurevych
Ubiquitous Knowledge Processing Lab, Technical University of Darmstadt, Germany; Language Technology Lab, University of Cambridge, United Kingdom
www.ukp.tu-darmstadt.de
Abstract: Current state-of-the-art approaches to cross-modal retrieval process text and visual input jointly, relying on Transformer-based architectures with cross-attention mechanisms that attend over …
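The retrieve-then-rerank idea in the title, a fast embedding stage followed by a slower joint rescoring stage, can be sketched generically as below. The random embeddings and the placeholder cross_score function are stand-ins; this shows the general two-stage pattern, not the paper's cooperative or joint models.

```python
import numpy as np

# Random vectors standing in for precomputed query/image embeddings, and a
# placeholder for an expensive cross-attention scorer.
rng = np.random.default_rng(0)
query_emb = rng.normal(size=256)
image_embs = rng.normal(size=(10_000, 256))  # precomputed and cacheable

def cross_score(query_id: int, image_id: int) -> float:
    """Placeholder for a slow joint (cross-attention) scoring model."""
    return float(rng.normal())

# Stage 1: fast bi-encoder retrieval via cosine similarity over all candidates.
sims = image_embs @ query_emb / (
    np.linalg.norm(image_embs, axis=1) * np.linalg.norm(query_emb)
)
shortlist = np.argsort(-sims)[:20]

# Stage 2: expensive joint rescoring, applied to the shortlist only.
reranked = sorted(shortlist, key=lambda i: cross_score(0, int(i)), reverse=True)
print(reranked[:5])
```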
A Neighborhood Framework for Resource-Lean Content Flagging
Sheikh Muhammad Sarwar, Dimitrina Zlatkova, Momchil Hardalov, Yoan Dinkov, Isabelle Augenstein, and Preslav Nakov
Checkstep, UK; University of Massachusetts, Amherst; University of Copenhagen, Denmark; Qatar Computing Research Institute, HBKU, Qatar; Amazon.com, US; Sofia University "St. Kliment Ohridski", Bulgaria
smsarwar@amazon.com, {didi, momchil, yoan.dinkov, isabelle, preslav.nakov}@checkstep.com
Abstract: We propose a novel framework for cross-lingual content flagging with limited target-…
TopiOCQA: Open-domain Conversational Question Answering with Topic Switching
Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries, and Siva Reddy
Mila, McGill University, Canada; ETH Zürich, Switzerland; Microsoft Montréal, Canada; ServiceNow Research, Canada; Facebook CIFAR AI Chair, Canada
{vaibhav.adlakha, siva.reddy}@mila.quebec
Abstract: In a conversational question answering scenario, a questioner seeks to extract information about a topic through a series of interdependent questions and answers. As …
Czech Grammar Error Correction with a Large and Diverse Corpus
Jakub Náplava, Milan Straka, Jana Straková, and Alexandr Rosen
Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics, Czech Republic; Charles University, Faculty of Arts, Institute of Theoretical and Computational Linguistics, Czech Republic
{naplava, straka, strakova}@ufal.mff.cuni.cz, alexandr.rosen@ff.cuni.cz
Abstract: We introduce a large and diverse Czech corpus annotated for grammatical error correction (GEC) with …