UCNLG 2011 - UCNLG+Eval – The 4th UCNLG Workshop: Language Generation and Evaluation Anja Belz, Roger Evans, Albert Gatt, Kristina Striegnitz
Topics/Call for Papers
There are many branches of NLP research that involve the generation of language (summarisation, MT, human-computer dialogue, application front-ends, data-to-text generation, document authoring, etc.). However, it is not always easy to identify common ground among the generation components of these application areas, which has sometimes made it difficult for generic research in 'Natural Language Generation' (NLG) to engage with them effectively. Increasingly common corpus-based approaches across these areas, and in particular in NLG itself, offer a new perspective on this situation and the opportunity to explore synergies and differences from the common grounding of corpus data.
This workshop is the fourth in an occasional series seeking to provide a forum for discussing NLG and its links with these closely related fields from a corpus-oriented perspective. The workshops have the following general aims:
to provide a forum for reporting and discussing corpus-oriented methods for generating language;
to foster cross-fertilisation between NLG and other fields where language is automatically generated; and
to promote the sharing of data and methods for the purpose of system building and comparative evaluation in all language generation research.
Each of these workshops has a special theme: at the first workshop (at Corpus Linguistics in 2005) it was use of corpora in NLG; at the second (UCNLG+MT at MT Summit XI in 2007) it was Language Generation and Machine Translation; at the third it was Language Generation and Summarisation (UCNLG+Sum at ACL-IJCNLP'09). The special theme of the 2011 workshop is Language Generation and Evaluation, and the event will showcase the latest developments in methods for evaluating computationally generated language across NLP, continue the discussion of future directions and host an invited talk on shared-task evaluation campaigns.
Evaluation Special Theme
The past five years have seen big changes in NLG evaluation. The field has moved from a situation where there were no comparative evaluation results for independently developed alternative approaches to the present increasingly rich diversity of data sets, methods and results for comparative evaluation (intrinsic and extrinsic, human-assessed and automatically computed). A distinctive and critical feature of these developments has been the community-led approach to the establishment of tasks, datasets and evaluation methods. The aim of the special evaluation theme at UCNLG+Eval is to provide a forum for reporting cutting-edge research on evaluation, taking stock of recent developments, discussing and comparing alternative approaches to evaluation and exploring possible directions for future development.
Call for papers
The UCNLG+Eval Workshop organisers invite submissions addressing the special theme of evaluating computationally generated text as well as submissions on all aspects of using corpora in the generation of language. Specific topics include, but are not limited to:
Statistical and machine learning approaches to language generation
Development and annotation of corpora for language generation research
Reuse of corpus resources developed for NLU (e.g. treebanks) in language generation research
Domain-specific vs. general-purpose corpora for language generation research
Evaluation of automatically generated language
Meta-evaluation of evaluation methods for language generation
Uses of corpora in the evaluation of automatically generated language
Proposals for new shared tasks in language generation
Note that by 'language generation research' we mean any field in which language is automatically generated, including research commonly coming under the headings of NLG, MT, document summarisation and human-computer dialogue.
For full submission details see the separate call for papers.
Organisation
Organisers:
Anja Belz, University of Brighton, UK
Roger Evans, University of Brighton, UK
Albert Gatt, University of Malta, Malta
Kristina Striegnitz, Union College, USA
Programme committee:
Aoife Cahill, Stuttgart University, Germany
Charlie Greenbacker, University of Delaware, USA
Emiel Krahmer, Tilburg University, NL
Mirella Lapata, University of Edinburgh, UK
Oliver Lemon, Heriot-Watt University, Edinburgh, UK
Daniel Marcu, ISI, University of Southern California, USA
Kathy McKeown, Columbia University, USA
Karolina Owczarzak, NIST, USA
Ehud Reiter, University of Aberdeen, UK
Important dates
22 Apr 2011: Deadline for paper submissions
20 May 2011: Notification of acceptance
03 Jun 2011: Camera-ready copies due
31 Jul 2011: UCNLG+Eval workshop in Edinburgh
Last modified: 2011-04-11 14:23:25