Adaptive 2015 - Workshop on Adaptive Natural Language Processing
Topics/Call for Papers
Natural Language Processing is becoming increasingly ubiquitous as language technologies become omnipresent in our daily lives. This involves a huge effort to develop language-based interfaces, text analytics, search engines, and writing assistants, among other systems and their related tools and resources. Adaptation has been key to facilitating the rapid development of language-based systems, with the reuse of existing resources as an alternative to creating tools and resources from scratch. These approaches have benefited from the recent surge of complex Machine Learning approaches applied to NLP tasks: Semi-supervised and Unsupervised Learning, Active Learning, Domain Adaptation, and even Representation Learning and Deep Learning have had a very positive impact on NLP tasks and applications.
This workshop aims to provide a meeting point for researchers working on the portability of language resources and methodologies across languages and domains, with a special focus on exploiting available knowledge as a base to facilitate and enhance new developments. We understand that there is a common factor among tasks such as porting a parser between related languages, adapting a dialogue system to a different domain, using rules inferred from an annotated corpus together with an unannotated corpus to port an information extraction system to another domain, or simplifying texts for different kinds of readers. We believe that sharing insights on such approaches will be enriching and will contribute to a better understanding of the problems and their solutions.
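As a concrete illustration of the kind of cross-domain reuse described above, the sketch below shows a simple self-training loop that adapts a text classifier trained on a labeled source domain to an unlabeled target domain. This is only an illustrative sketch under stated assumptions: scikit-learn, the toy in-line corpora, and the confidence threshold are hypothetical choices, not part of this call.

# Minimal self-training sketch for cross-domain text classification,
# assuming scikit-learn is available. The in-line corpora are hypothetical
# stand-ins for a labeled source-domain corpus and an unlabeled
# target-domain corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Labeled source-domain data (e.g. product reviews): 1 = positive, 0 = negative.
source_texts = ["great battery life", "terrible screen",
                "works perfectly", "broke after a week"]
source_labels = np.array([1, 0, 1, 0])

# Unlabeled target-domain data (e.g. restaurant reviews).
target_texts = ["the soup was great", "terrible service",
                "perfectly cooked steak", "the food arrived cold"]

# Shared bag-of-words representation fitted on the source domain.
vectorizer = TfidfVectorizer()
X_source = vectorizer.fit_transform(source_texts)
X_target = vectorizer.transform(target_texts)

# Train on the source domain only.
clf = LogisticRegression()
clf.fit(X_source, source_labels)

# Pseudo-label the target-domain examples the source model is confident about,
# then retrain on the union of gold and pseudo-labeled data.
probs = clf.predict_proba(X_target)
confident = probs.max(axis=1) >= 0.6   # the threshold is an arbitrary choice
pseudo_labels = probs.argmax(axis=1)[confident]

X_combined = np.vstack([X_source.toarray(), X_target[confident].toarray()])
y_combined = np.concatenate([source_labels, pseudo_labels])
adapted_clf = LogisticRegression().fit(X_combined, y_combined)

In a real setting one would iterate this loop, tune the threshold on held-out target data, and use a richer representation; the point here is only the structure of the adaptation step.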
We expect that themes like Representation Learning, Deep Learning and Active Learning, and their successful applications to various areas of NLP, will raise interesting, intellectually challenging discussions.
Workshop Topics
Contributions may present results from completed as well as ongoing research, with an emphasis on novel approaches, methods, ideas, and perspectives.
The topics of the workshop include, but are not limited to:
Semi-supervised learning, deep learning, active learning, and representation learning for domain adaptation, language portability, and task portability
Adaptation and reuse of tools and resources for closely related languages
Adaptation of highly valuable resources, like treebanks, to languages not closely related
Linguistic issues in language portability: false friends, asymmetric discretization of the semantic continuum
Learning from multiple domains
Development of multi-domain datasets
Evaluation paradigms for complex learning
Domain adaptation for specific applications: parsing, machine translation, information extraction, document classification, sentiment analysis, and author attribution and profiling
Last modified: 2015-01-20 23:26:18