
NLGSS 2015 - 2015 Summer School on Natural Language Generation, Summarisation, and Dialogue Systems

Date: 2015-07-20 - 2015-07-24

Deadline: 2015-05-15

Venue: Aberdeen, United Kingdom

Website: https://nlgsummer.github.io

Topics/Call for Papers

The objective of this summer school is to introduce participants to the concepts and research questions in natural language generation (NLG), summarisation and dialogue systems. Although these three areas produce natural language, their distinct communities seldom interact because each community relies on different methods and because the inputs to each kind of system are different. There is, however, considerable overlap in the kinds of problems that need to be considered, from selecting the right content to evaluating the systems. We believe that focusing on the similarities of the different areas can stimulate "cross-pollination" of research. For example, most summarisation techniques could benefit from deeper semantic processing as performed in dialogue systems. Similarly, many NLG systems could benefit from techniques used by dialogue systems to substantially improve the generated output.
The summer school is aimed primarily at PhD students and early-career researchers, but more experienced researchers will be admitted to the extent that space permits. Early-bird registration is £200 for students and £250 for others until the 15th of May.
Course Summary
Introduction to NLG: Ehud Reiter (University of Aberdeen, ARRIA NLG)
The basic concepts of NLG will be introduced, including document planning, microplanning, and realisation. Examples of real NLG systems will also be presented, covering both what they do and how they work.
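To make the pipeline concrete, here is a minimal sketch that wires the three stages together for an invented weather-report domain. The data, rules, and wording are illustrative only, not taken from any system presented at the school:

```python
# A minimal sketch of the classic NLG pipeline: document planning ->
# microplanning -> realisation. All rules and data are invented.

def document_planning(data):
    """Decide which messages to convey, and in what order."""
    return [("temperature", data["temp_max"]), ("wind", data["wind_speed"])]

def microplanning(messages):
    """Choose words and sentence structure for each message."""
    templates = {
        "temperature": "the temperature|reaches|{} degrees",
        "wind": "the wind|blows|at {} km/h",
    }
    return [templates[kind].format(value).split("|") for kind, value in messages]

def realisation(specs):
    """Linearise each (subject, verb, complement) spec into a sentence."""
    return " ".join(
        f"{subj.capitalize()} {verb} {comp}." for subj, verb, comp in specs
    )

print(realisation(microplanning(document_planning(
    {"temp_max": 28, "wind_speed": 12}))))
# -> The temperature reaches 28 degrees. The wind blows at 12 km/h.
```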
NLG in detail
Content Determination: Chris Mellish (University of Aberdeen)
This talk will summarise existing approaches to content determination for NLG, as well as touching on the closely related topics of text ordering and structuring. It will discuss why content determination is hard and what sorts of (hand-crafted or learned) models can be used to inform it. It is often useful to regard content determination as a search problem, and we will take this approach in order to compare the different methods that have been used.
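As a toy illustration of the search view (with entirely invented facts, importance scores, and length costs), content determination can be framed as choosing the subset of facts that maximises importance under a length budget:

```python
# Content determination as exhaustive search over subsets of facts.
# Importance scores and word costs are invented for the example.
import itertools

importance = {"temp": 3.0, "wind": 2.0, "humidity": 1.5, "pressure": 0.5}
word_cost = {"temp": 5, "wind": 4, "humidity": 6, "pressure": 6}
budget = 12  # maximum number of words available

best, best_score = None, -1.0
for r in range(len(importance) + 1):
    for subset in itertools.combinations(importance, r):
        if sum(word_cost[f] for f in subset) <= budget:
            score = sum(importance[f] for f in subset)
            if score > best_score:
                best, best_score = subset, score

print(best, best_score)  # -> ('temp', 'wind') 5.0
```

Real systems replace the exhaustive loop with heuristic or learned search, but the comparison of methods in the talk can be read against this common framing.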
Micro-planning: Albert Gatt (University of Malta)
This session will be devoted to discussing different micro-planning tasks. In particular, we will review some of the “classic” sub-tasks that microplanners have often been designed to perform, notably: (a) lexicalisation, the task of choosing the right words or lemmas to express the contents of the message; (b) aggregation, the task of merging distinct representations into a single, more concise representation; (c) referring expression generation, the task of selecting the content (and, to some extent, the form) of referential noun phrases in text. Following a brief discussion of these “classic” sub-tasks, the session will then move on to a relatively under-studied problem in microplanning, which arises when the text being generated has “narrative” qualities, that is, it recounts events which, in addition to being related to each other in the generated discourse, are also related to each other in time. In this case, new questions arise in relation to choices the microplanner has to make, notably where tense, aspect and temporal connectives are concerned. For the purposes of illustration, the discussion of these microplanning sub-tasks will be conducted with reference to a concrete family of NLG systems, developed in the BabyTalk Project (Portet et al., 2009; Gatt et al., 2009; Hunter et al., 2011, 2012). However, reference will also be made to recent statistical approaches to NLG, in particular, the use of machine-learning methods to learn, from data, models that provide the kernel of solutions to these sub-tasks.
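As a small, invented illustration of one of these sub-tasks, the sketch below performs a simple form of aggregation, merging adjacent messages that share a subject. The representations are made up for the example and are not taken from BabyTalk:

```python
# Toy aggregation: adjacent sentence specs with the same subject are
# merged into a single spec with conjoined predicates.

def aggregate(specs):
    merged = []
    for spec in specs:
        if merged and merged[-1]["subject"] == spec["subject"]:
            merged[-1]["predicates"].extend(spec["predicates"])
        else:
            merged.append({"subject": spec["subject"],
                           "predicates": list(spec["predicates"])})
    return merged

specs = [{"subject": "the baby", "predicates": ["is stable"]},
         {"subject": "the baby", "predicates": ["is breathing unaided"]}]
print(aggregate(specs))
# -> [{'subject': 'the baby',
#      'predicates': ['is stable', 'is breathing unaided']}]
# i.e. "The baby is stable and is breathing unaided."
```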
Surface Realisation: Albert Gatt (University of Malta)
In this session, we will first discuss the domain of realisation with reference to a number of different languages. The purpose of this is mainly to delineate the problem: different languages will make different demands on a realiser, and in some cases, syntactic choices will have repercussions for microplanning decisions. Next, we will look at an overview of different realisers, starting with some classic rule-based examples, such as RealPro and KPML (Bateman, 1997). These will be contrasted with recent approaches, such as HALOGEN (Langkilde-Geary and Knight, 2002) or OpenCCG (White et al, 2007), where the aim is to minimise the rule-based component while allowing syntactic choices to be made probabilistically, usually through the use of n-gram based language models. In each of these cases, whether rule-based or statistical, there are assumptions made about the nature of the input, usually motivated by some theory of syntax. Finally, we will distinguish realisers such as the above from “realisation engines”, which simply provide the tools to perform morphosyntactic realisation in a given language, without any commitment as to the nature of the input representation. As a specific case study, we will use SimpleNLG (Gatt and Reiter, 2009).
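The toy below hints at what a “realisation engine” does: morphosyntactic realisation (here, rough English subject-verb agreement) from a minimal clause spec, with no commitment to any input theory. It is an invented sketch in the spirit of such engines, not SimpleNLG's actual API:

```python
# Toy morphosyntactic realisation with crude English 3sg agreement.
# The inflection rules are deliberately simplistic.

def third_singular(verb):
    if verb.endswith(("s", "sh", "ch", "x", "z")):
        return verb + "es"
    if verb.endswith("y") and verb[-2] not in "aeiou":
        return verb[:-1] + "ies"
    return verb + "s"

def realise(clause):
    verb = clause["verb"]
    if clause.get("number", "singular") == "singular":
        verb = third_singular(verb)
    sentence = f"{clause['subject']} {verb} {clause['object']}."
    return sentence[0].upper() + sentence[1:]

print(realise({"subject": "the monkey", "verb": "chase", "object": "Mary"}))
# -> The monkey chases Mary.
```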
Introduction to Summarisation: Advaith Siddharthan (University of Aberdeen)
This tutorial will cover a range of text summarisation approaches described in the literature for extractive and abstractive summarisation, including models for sentence selection in extractive summarisation based on various statistical definitions of "topic", abstractive summarisation through aggregation and deletion, attempts at microplanning (e.g., generating referring expressions), the use of template based generation, and issues of text planning or sentence ordering.
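As a minimal illustration of sentence selection under one of the simplest statistical notions of “topic” (word frequency), here is an invented sketch, not a named system from the literature:

```python
# Frequency-based extractive summarisation: score each sentence by the
# average corpus frequency of its words, then keep the top-ranked ones.
import re
from collections import Counter

def summarise(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    return sorted(sentences, key=score, reverse=True)[:n_sentences]

doc = ("NLG systems generate text. Summarisation systems condense text. "
       "Dialogue systems converse with users.")
print(summarise(doc))
```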
Introduction to Dialogue Systems: Paul Piwek (Open University)
The module will start with a question: what is dialogue? We will survey studies on human-human dialogue in search of a tentative answer. We will then compare and contrast human-human dialogue with human-machine dialogue. This will lead us to an examination of various dialogue systems, past and present. We will consider dialogue system architectures and approaches to dialogue management. The module concludes with a look at recent developments, from incremental processing in dialogue to models of non-cooperative dialogue.
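For a flavour of the simplest dialogue-management strategy such surveys cover, here is an invented finite-state, slot-filling sketch; the states, prompts, and travel domain are all made up for illustration:

```python
# A finite-state dialogue manager: each state asks for one slot and
# names its successor. Real systems add error handling and grounding.

STATES = {
    "ask_origin":      ("Where are you travelling from?", "origin", "ask_destination"),
    "ask_destination": ("Where are you travelling to?", "destination", "confirm"),
}

def run_dialogue(user_turns):
    state, slots = "ask_origin", {}
    for utterance in user_turns:
        prompt, slot, next_state = STATES[state]
        print("SYSTEM:", prompt)
        print("USER:  ", utterance)
        slots[slot] = utterance
        state = next_state
        if state == "confirm":
            print(f"SYSTEM: Booking a trip from {slots['origin']} "
                  f"to {slots['destination']}.")
            break
    return slots

run_dialogue(["Aberdeen", "Edinburgh"])
```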
Learning to generate: concept-to-text generation using machine learning: Yannis Konstas (University of Edinburgh)
The explosion of the world wide web in recent years has generated data that are both very large in volume and in formats that are obscure or inaccessible to non-expert users (e.g., numeric or tabular forms, graphs, etc.). Several successful generation systems have been produced in the past twenty years, mostly relying on hand-crafted rules or expert-driven grammars. While reliable and capable of generating high-quality output, such systems usually find it difficult to exploit patterns in very large datasets, and are hard to port to different domains without significant human intervention. In this tutorial, we will explore NLG systems that learn the well-known pipeline modules of content selection, microplanning and surface realisation automatically from data. We will visit methods that cast each step as a probabilistic model or other weighted function, and learn their parameters by optimising a text-output-related objective metric (e.g., BLEU or METEOR scores). Generation is then viewed as a common search problem, which entails finding the best-scoring output given the trained model and an input. We will also compare systems that optimise each module in isolation with those that optimise them jointly.
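A minimal sketch of this generation-as-search view follows, with invented features and weights; in a real system the weights would be learned by optimising a metric such as BLEU on training data:

```python
# Rank candidate realisations of a data record under a weighted linear
# model. Features, weights, and the record are invented for illustration.

def features(candidate, record):
    return {
        "length": len(candidate.split()),
        "covers_temp": float(str(record["temp"]) in candidate),
        "covers_wind": float(str(record["wind"]) in candidate),
    }

# In practice these would be learned; here they are hand-set.
weights = {"length": -0.1, "covers_temp": 2.0, "covers_wind": 2.0}

def score(candidate, record):
    return sum(weights[k] * v for k, v in features(candidate, record).items())

record = {"temp": 28, "wind": 12}
candidates = [
    "It will be 28 degrees with winds of 12 km/h.",
    "It will be warm.",
]
print(max(candidates, key=lambda c: score(c, record)))
# -> the first candidate, which covers both facts
```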
Evaluation: Ehud Reiter (University of Aberdeen, ARRIA NLG)
Different ways to evaluate NLG systems will be discussed, including task-based, human-based, and metric-based evaluation; what can we learn from these different types of evaluation? We will also summarise practice in each of these types of evaluation, and present the design and outcomes of several real NLG evaluations.
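As a small example of metric-based evaluation, the snippet below scores one invented system output against one invented reference using NLTK's BLEU implementation (it assumes the nltk package is installed):

```python
# Metric-based evaluation with BLEU. On short sentences, bigram BLEU
# (weights over 1- and 2-grams only) is more stable than the default.
from nltk.translate.bleu_score import sentence_bleu

reference = "the temperature will reach 28 degrees today".split()
hypothesis = "the temperature reaches 28 degrees today".split()

print(sentence_bleu([reference], hypothesis, weights=(0.5, 0.5)))
```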
Readability: Thomas François (Université catholique de Louvain)
The field of readability aims at automatically assessing the difficulty of texts for a given population. A readability model uses some of the linguistic characteristics of the texts and combines them with a statistical algorithm, traditionally linear regression. The first attempts date back to the 1920s, and the field has since seen the development of well-known formulas such as Flesch's (1948) or Gunning's (1952). These have been widely used in the Anglo-Saxon world, for instance to control the difficulty of articles in mainstream newspapers. However, the limitations of readability models were being stressed as early as the late 1970s (Kintsch and Vipond, 1979); Selzer (1981) even published a critical paper entitled “Readability is a four-letter word”. This eventually led to the investigation of new research avenues, relying on computational linguistics as well as machine learning techniques to improve traditional approaches (Collins-Thompson and Callan, 2005; Schwarm and Ostendorf, 2005). In spite of these recent advances, readability remains a challenging field that offers considerable room for improvement as well as many opportunities for real applications.
In this readability module, we will first outline the main tendencies in the field, with a focus on recent work that applies NLP techniques to readability. We will also describe the usual methodological framework used to design a readability model and discuss some of the choices available within this framework. A summary of the evaluation techniques used in the field will end the first part of the module. In the second part, we will discuss some of the main challenges and issues in the field as we see them, such as the challenge of collecting large datasets of difficulty-annotated texts to train modern statistical algorithms, the issue of cross-domain generalisation, and the adaptation of readability methods to lower levels (sentence or word). The module will conclude with some perspectives for future research in the field.
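For concreteness, here is the Flesch (1948) Reading Ease formula in code, with a deliberately crude vowel-group syllable counter standing in for a proper one:

```python
# Flesch Reading Ease:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher scores mean easier text. The syllable counter is a rough proxy.
import re

def count_syllables(word):
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_reading_ease(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
                   - 84.6 * (syllables / len(words))

print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```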
Cognitive Modelling: the case of reference: Kees van Deemter (University of Aberdeen)
This lecture will discuss the idea that NLG systems can be seen as computational models of human language production, using the production of referring expressions, a topic that has attracted a substantial amount of work in recent years, as a case study. I will discuss different types of computational models, and the different goals that these models can have, such as (1) emulating an average speaker in a corpus, (2) explaining the way in which speakers differ across a corpus, and (3) generating language that is easy for human readers or hearers to understand (e.g. in practical applications). These computational models will be compared with more traditional models developed in mainstream psycholinguistics.
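One of the best-known computational models in this area is Dale and Reiter's (1995) Incremental Algorithm for referring expression generation; the sketch below gives its core loop over an invented domain and attribute preference order:

```python
# Incremental Algorithm (Dale & Reiter, 1995), simplified: walk the
# attributes in preference order, keeping any that rule out distractors,
# and stop once the target is uniquely identified.

DOMAIN = {
    "d1": {"type": "dog", "colour": "black", "size": "small"},
    "d2": {"type": "dog", "colour": "white", "size": "small"},
    "d3": {"type": "cat", "colour": "black", "size": "large"},
}
PREFERENCE = ["type", "colour", "size"]  # most to least preferred

def incremental_algorithm(target, domain=DOMAIN):
    distractors = {x for x in domain if x != target}
    description = {}
    for attr in PREFERENCE:
        value = domain[target][attr]
        ruled_out = {x for x in distractors if domain[x][attr] != value}
        if ruled_out:
            description[attr] = value
            distractors -= ruled_out
        if not distractors:
            break
    return description

print(incremental_algorithm("d1"))
# -> {'type': 'dog', 'colour': 'black'}, i.e. "the black dog"
```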
The New Science of Information Delivery: Robert Dale (ARRIA NLG)
Natural Language Generation is a means to an end. That end is the delivery of information, and the great thing about NLG is that it provides a way of automating the delivery of the right information to the right people at the right time in the right way. But to really do that well, we need to understand the task of information delivery, and we need to understand how a variety of scientific disciplines provide underpinnings for how we can do that task well. This talk aims to stand back and look at the bigger picture: what it would mean to have a science of information delivery, with NLG as a key player along with a number of other technologies.
Open Lab
Yaji Sripada (University of Aberdeen, ARRIA NLG)
The Open Lab sessions are your opportunity to test your learning at the summer school. You will build an NLG system end-to-end using the software modules we provide. You can work alone or as a team. The idea is not to push you into writing code round-the-clock to build loads of functionality. Instead you will be encouraged to focus on learning first-hand the design choices and trade-offs involved in building an NLG application. If you are new to computer programming but willing to learn, help will be available in these supervised lab sessions to get you started with your first program!
Poster/Demo Session
At this session the participants can showcase the applications that they have developed in the open lab sessions and receive feedback from the lecturers and other participants. Also, there is an opportunity for all the participants to bring along a poster of the work they are currently involved in. It will be a good networking event for all of us.
Evening lectures
Graeme Ritchie (University of Aberdeen) Creative Language Generation
Over the past two decades, the field of computational creativity has come into being, and grown considerably. Its aim is "to model, simulate or replicate creativity using a computer". To this end, most of the work focusses on building software which engages in activities commonly thought, within society, to be creative, such as visual art, concept generation, musical composition, etc. Many of these domains involve natural language (e.g. poetry, stories, jokes), and hence such work can be seen as a subarea, creative language generation. This talk will give a general introduction to the field of computational creativity, briefly review some of the language-based work, and discuss some of the methodological issues raised by such research.
Paul Piwek (Open University) TBC
