MRQA 2018 - 2018 Workshop on Machine Reading for Question Answering
Date: 2018-07-19
Deadline: 2018-04-08
Venue: Melbourne, VIC, Australia
Keywords:
Website: https://mrqa2018.github.io
Topics/Call for Papers
Machine Reading for Question Answering (MRQA) has become an important testbed for evaluating how well computer systems understand human language, as well as a crucial technology for industry applications such as search engines and dialog systems. The research community has recently created a multitude of large-scale datasets over text sources such as Wikipedia (WikiReading, SQuAD, WikiHop), news and other articles (CNN/Daily Mail, NewsQA, RACE), fictional stories (MCTest, CBT, NarrativeQA), and general web sources (MS MARCO, TriviaQA, SearchQA). These new datasets have in turn inspired an even wider array of new question answering systems.
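Most of these benchmarks pose extractive reading comprehension: given a passage and a question, a system must identify the answer, typically as a span of the passage, and is scored with metrics such as exact match. As a rough illustration only, the sketch below shows a SQuAD-style example instance and a simple exact-match check; the passage, question, and helper names are invented for this illustration and are not drawn from any of the datasets above.

    # Minimal sketch of a SQuAD-style extractive QA instance and an
    # exact-match check. The data below is invented for illustration;
    # real datasets contain many such (context, question, answer span) triples.
    import string

    example = {
        "context": "The 2018 MRQA workshop was held in Melbourne, Australia.",
        "question": "Where was the 2018 MRQA workshop held?",
        "answers": [{"text": "Melbourne, Australia", "answer_start": 35}],
    }

    def normalize(text):
        # Lowercase, drop punctuation, and collapse whitespace before comparing.
        text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
        return " ".join(text.split())

    def exact_match(prediction, gold_answers):
        # True if the prediction matches any reference answer after normalization.
        return any(normalize(prediction) == normalize(a["text"]) for a in gold_answers)

    print(exact_match("Melbourne, Australia.", example["answers"]))  # -> True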
This workshop will gather researchers to address and discuss important research topics surrounding MRQA, including:
Accuracy: How can we make MRQA systems more accurate?
Interpretability: How can systems provide rationales for their predictions?
Speed and Scalability: How can systems scale to consider larger contexts, from long documents to the whole web?
Robustness: How can systems generalize to other datasets and settings beyond the training distribution?
Dataset Creation: What are effective methods for building new MRQA datasets?
Dataset Analysis: What challenges do current MRQA datasets pose?
Error Analysis: What types of questions or documents are particularly challenging for existing systems?