MasakhaNER: Named Entity Recognition for African Languages
MasakhaNER: Named Entity Recognition for African Languages David Ifeoluwa Adelani1∗, Jade Abbott2∗, Graham Neubig3, Daniel D’souza4∗, Julia Kreutzer5∗, Constantine Lignos6∗, Chester Palen-Michel6∗, Happy Buzaaba7∗, Shruti Rijhwani3, Sebastian Ruder8, Stephen Mayhew9, Israel Abebe Azime10∗, Shamsuddeen H….
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them Patrick Lewis†‡ Yuxiang Wu‡ Linqing Liu‡ Pasquale Minervini‡ Heinrich Küttler† Aleksandra Piktus† Pontus Stenetorp‡ Sebastian Riedel†‡ †Facebook AI Research ‡University College London,…
He Thinks He Knows Better than the Doctors: BERT for Event Factuality Fails on Pragmatics
He Thinks He Knows Better than the Doctors: BERT for Event Factuality Fails on Pragmatics Nanjiang Jiang Department of Linguistics The Ohio State University, USA jiang.1879@osu.edu Marie-Catherine de Marneffe Department of Linguistics The Ohio State…
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT Prakhar Ganesh1∗, Yao Chen1∗, Xin Lou1, Mohammad Ali Khan1, Yin Yang2, Hassan Sajjad3, Preslav Nakov3, Deming Chen4, Marianne Winslett4 1Advanced Digital Sciences Center, Singapore 2College of…
Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?
Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? William Merrill∗ Yoav Goldberg∗† Roy Schwartz‡ Noah A. Smith∗§ ∗Allen Institute for AI, United States †Bar Ilan University, Israel ‡Hebrew…
Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study
Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study Xiangyang Mou∗ Chenghao Yang∗ Mo Yu∗ Bingsheng Yao Xiaoxiao Guo Saloni Potdar Hui Su Rensselaer Polytechnic Institute & IBM, United States moux4@rpi.edu gflfof@gmail.com Abstract…
Measuring and Improving Consistency in Pretrained Language Models
Measuring and Improving Consistency in Pretrained Language Models Yanai Elazar1,2 Nora Kassner3 Shauli Ravfogel1,2 Abhilasha Ravichander4 Eduard Hovy4 Hinrich Schütze3 Yoav Goldberg1,2 1Computer Science Department, Bar Ilan University, Israel 2Allen Institute for Artificial Intelligence,…
Maintaining Common Ground in Dynamic Environments
Maintaining Common Ground in Dynamic Environments Takuma Udagawa1 and Akiko Aizawa1,2 The University of Tokyo, Tokyo, Japan1 National Institute of Informatics, Tokyo, Japan2 {takuma udagawa,aizawa}@nii.ac.jp Abstract Common grounding is the process of creating and…
Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs
Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs Emanuele Bugliarello Ryan Cotterell Naoaki Okazaki Desmond Elliott University of Copenhagen University of Cambridge ETH Zürich Tokyo Institute of Technology emanuele@di.ku.dk, rcotterell@inf.ethz.ch, okazaki@c.titech.ac.jp,…
How Can We Know When Language Models Know?
How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering Zhengbao Jiang†, Jun Araki‡, Haibo Ding‡, Graham Neubig† †Language Technologies Institute, Carnegie Mellon University, United States ‡Bosch Research,…