ASDGML 2011 - ACL-HLT 2011 Workshop: Automatic Summarization for Different Genres, Media, and Languages
Topics/Call for Papers
ACL-HLT 2011 Workshop: Automatic Summarization for Different Genres, Media, and Languages
CALL FOR PAPERS
Automatic summarization of written news has been an area of active research for over a decade now. A wide range of summarization approaches for news have been developed and tested on large reference datasets provided by the Document Understanding Conferences (DUC) and the current Text Analysis Conference (TAC), and both manual and automatic evaluation measures have been developed and validated for this genre.
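By way of illustration, automatic evaluation at DUC and TAC has relied heavily on n-gram overlap between system and reference summaries (the ROUGE family of measures). The sketch below, in Python, computes a simplified ROUGE-1-style unigram recall; it is a minimal illustration rather than the official ROUGE toolkit, and the tokenization and example texts are assumptions made for the example.

# Minimal sketch of a ROUGE-1-style unigram recall score.  Illustration only:
# the official ROUGE toolkit handles stemming, stop words, multiple references,
# and other n-gram orders.
from collections import Counter

def unigram_recall(system_summary, reference_summary):
    """Fraction of reference unigrams that are also covered by the system summary."""
    sys_counts = Counter(system_summary.lower().split())
    ref_counts = Counter(reference_summary.lower().split())
    if not ref_counts:
        return 0.0
    overlap = sum(min(n, sys_counts[w]) for w, n in ref_counts.items())
    return overlap / sum(ref_counts.values())

# Toy example texts (assumptions, not DUC/TAC data).
reference = "the storm closed three major highways on monday"
system = "three highways were closed by the storm"
print("ROUGE-1-style recall: %.2f" % unigram_recall(system, reference))

Such measures were developed and validated on news data, which is one reason their adequacy for other genres and media remains an open question, as discussed below.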
Yet much of the information that users need to navigate and access in a timely fashion does not consist of news alone. Information is available from different sources, made available through different media, generated for different purposes, and written in different languages. This has motivated less traditional work on domain-specific summarization of emails, blogs, forums, Twitter messages, and scientific articles, as well as on other media, including speech recordings (broadcast news, voice mails, lectures, meetings) and multimodal data. There have also been several pilot evaluations of novel tasks on news at TAC, such as update summarization and opinion summarization. Summarization has also been explored in different languages, including in cross-lingual studies.
When researchers start investigating new domains, media, and languages, they typically first apply existing summarization techniques originally developed for news; however, there are many challenges that systems need to overcome when applied to these new tasks:
-- Domain differences: when using speech recordings as input, for example, a system needs to handle transcription errors in speech recognition output and cope with the fact that there are no explicitly marked paragraphs or sentences, that the input contains disfluencies and ungrammatical utterances, and that there are multiple speakers and off-topic discussion. Blogs also pose challenges for summarization due to idiosyncrasies in formatting and language.
-- Language differences: tools for text analysis are not available for all languages, so it becomes important to evaluate the degree to which a given approach is language independent, or to develop approaches for specific languages.
-- Data issues: for most of the novel genres and domains, standard test collections do not exist. Researchers sometimes report results on small proprietary datasets, so comparison of results and techniques becomes a problem and generalization of findings might not be possible.
-- Evaluation issues: proper evaluation techniques for non-news summarization have not been studied, and at times existing measures have proved inadequate.
-- Research questions abound: Are we better off developing novel domain-specific algorithms rather than applying generic ones built for textual news summarization? Or should we invest effort in developing preprocessing tools that make the input more like the written text the algorithms were originally designed for (see the sketch after this list)? Should input in other languages first be translated to English? How can we ensure the development of more widely accessible test data, created following mutually agreed guidelines? What new evaluation metrics are needed, and what modifications of existing evaluation protocols are necessary?
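To make the second research question above concrete, the sketch below normalizes a hypothetical ASR transcript (unpunctuated, with fillers and disfluencies) into pseudo-sentences and then applies a generic frequency-based extractive scorer of the kind developed for written news. It is a minimal illustration under stated assumptions: the filler list, window size, and scoring scheme are invented for the example and are not a prescribed pipeline.

# Illustrative sketch only: turn a raw speech transcript (no punctuation,
# filler words) into pseudo-sentences, then apply a generic frequency-based
# extractive scorer.  The filler list, window size, and scoring are assumptions.
from collections import Counter

FILLERS = {"uh", "um", "like", "so", "i", "mean"}  # toy filler/stop-word list

def to_pseudo_sentences(transcript, window=12):
    """Drop fillers and split the unpunctuated stream into fixed-length pseudo-sentences."""
    tokens = [t for t in transcript.lower().split() if t not in FILLERS]
    return [" ".join(tokens[i:i + window]) for i in range(0, len(tokens), window)]

def extractive_summary(sentences, n=2):
    """Select the n pseudo-sentences with the highest average word frequency."""
    freqs = Counter(w for s in sentences for w in s.split())
    def score(s):
        words = s.split()
        return sum(freqs[w] for w in words) / len(words) if words else 0.0
    return sorted(sentences, key=score, reverse=True)[:n]

# Toy transcript (assumption, not real meeting data).
raw = ("um so the uh budget meeting is moved to friday because the um "
       "quarterly numbers are not ready and uh we need the numbers before "
       "the budget meeting can go ahead")
for sent in extractive_summary(to_pseudo_sentences(raw)):
    print("-", sent)

Whether such lightweight normalization is preferable to redesigning the summarization algorithm itself is exactly the kind of trade-off the workshop hopes to discuss.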
This workshop aims to bring together researchers who have been working on traditional text summarization and on these new domains to share their experience and establish a unified roadmap for the future development of summarization research.
The workshop solicits papers on a wide range of topics relevant to summarization research, including but not limited to:
-- addressing challenges and domain-specific issues in summarization of different genres, such as emails, blogs, forums, tweets, books, and research articles
-- summarization of speech data, such as broadcast news, lectures, voice mails, and meetings
-- cross-genre analysis
-- summarization using additional modalities, such as gesture and video
-- cross-lingual summarization and findings across different languages
-- abstractive summarization
-- user-tailored summarization, including query-focused and update summarization
-- evaluation issues for different genres and media
-- task-based (extrinsic) evaluation
-- resources for summarization research
-- integration of summarization with other tasks, such as question answering and information retrieval
Submissions should follow the ACL HLT 2011 length and formatting requirements, found at http://www.acl2011.org/call.shtml. Reviewing will be blind, so papers should not include the authors' names or affiliations. The maximum length for papers is 8 pages, including references. Papers should be submitted as PDF documents via: https://www.softconf.com/acl2011/summarization/
Important Dates
Apr 10 Paper due date
Apr 25 Notification of acceptance
May 06 Camera-ready deadline