DeepLo 2018 - The 1st Workshop on Deep Learning Approaches for Low-Resource Natural Language Processing
Topics/Call for Papers
Natural Language Processing is being revolutionized by deep learning with neural networks. However, deep learning requires large amounts of annotated data, and its advantage over traditional statistical methods typically diminishes when such data is not available; for example, statistical machine translation (SMT) continues to outperform neural machine translation (NMT) in many bilingually resource-poor scenarios. Large amounts of annotated data do not exist for many low-resource languages, and even for high-resource languages it can be difficult to find linguistically annotated data of sufficient size and quality to allow neural methods to excel. This workshop aims to bring together researchers from the NLP and ML communities who work on learning with neural methods when there is not enough data for those methods to succeed out of the box. Techniques of interest include self-training, paired training, distant supervision, semi-supervised and transfer learning, and human-in-the-loop techniques such as active learning.
One class of approaches tackles the scarcity of fully annotated data by using weakly annotated data, i.e., labeled data that deviates in some way from the data ideally required for supervised training. Distant supervision for relation extraction is a good example of such methods: relation triples from a knowledge base such as Freebase are used to automatically label text for training. Another source of abundant weakly annotated data for some NLP applications is user feedback, which is typically partial, discrete, and noisy; learning from such data is challenging but valuable for the NLP community. Bandit learning and counterfactual learning from bandit feedback aim to leverage online and offline user feedback, respectively, to improve model training. An alternative to online user feedback is to drastically reduce the amount of annotated data while increasing its quality: active learning algorithms incrementally select examples for labeling, with the aim of minimizing the amount of labeled data required to reach a given level of performance.
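To make the last point concrete, below is a minimal sketch of pool-based active learning with uncertainty sampling. The classifier, the synthetic data, and the oracle labels are assumptions made purely for illustration; they do not describe any particular submission or system.

# Minimal sketch of pool-based active learning with uncertainty sampling.
# Assumes a scikit-learn-style classifier; the data and "oracle" labels are
# synthetic stand-ins for a human annotator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool of 1,000 examples with 20 features.
X_pool = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y_oracle = (X_pool @ true_w > 0).astype(int)

# Start from a small labeled seed set.
labeled = list(range(20))
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 rounds of annotation
    clf.fit(X_pool[labeled], y_oracle[labeled])
    # Uncertainty sampling: query the example whose predicted probability is
    # closest to 0.5, i.e. the one the current model is least sure about.
    probs = clf.predict_proba(X_pool[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)      # "annotate" the queried example via the oracle
    unlabeled.remove(query)

print(f"labeled {len(labeled)} examples, "
      f"pool accuracy = {clf.score(X_pool, y_oracle):.3f}")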
Another class of approaches improves performance on the main task by using data from related tasks. For example, one may not have access to a large amount of bilingual parallel data to train neural MT for a given language pair, but may have access to monolingual data in the source and target languages, bilingual parallel data for other language pairs, or annotated data for other tasks such as named entity recognition or parsing. Techniques for leveraging these additional data resources include transfer learning and multi-task learning, where we care about performance on the target task alone or on all tasks, respectively. This class of approaches also includes domain adaptation, where the task is fixed but annotated training data is drawn from a domain that differs from the test domain; generative adversarial networks (GANs), for instance, can be used to learn representations that are robust to domain shift. Another general approach to data sparsity in deep learning is data synthesis. In NMT, the back-translation method can be seen as an example of synthesizing bilingual training data; while data synthesis can be laborious and its effects are harder to predict in other NLP tasks, it has the potential to yield significant performance gains when the model is trained well. Lastly, semi-supervised learning methods learn from both annotated and unannotated data, by creating better representations for text or by regularizing model parameters.
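As an illustration of the data-synthesis idea, here is a minimal sketch of back-translation as used in NMT. The reverse_translate argument stands in for a hypothetical pretrained target-to-source translation model; both it and reverse_model below are assumptions for illustration only.

# Minimal sketch of back-translation for synthesizing parallel training data.
from typing import Callable, List, Tuple

def back_translate(
    mono_target: List[str],
    reverse_translate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Pair each monolingual target-language sentence with a synthetic
    source-language sentence produced by the reverse (target-to-source) model."""
    synthetic_pairs = []
    for tgt in mono_target:
        src_synthetic = reverse_translate(tgt)        # machine-generated source side
        synthetic_pairs.append((src_synthetic, tgt))  # human-written target side
    return synthetic_pairs

# Usage: mix the synthetic pairs into the real bitext before training the
# forward (source-to-target) model, e.g.
#   train_pairs = real_parallel_pairs + back_translate(mono_target, reverse_model.translate)
# where reverse_model is any pretrained target-to-source system (hypothetical here).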
Topics of Interest
Active learning
Transfer learning
Multi-task learning
Semi-supervised learning
Dual learning
Unsupervised learning
Generative adversarial networks
Bandit learning
Domain adaptation
Decipherment or zero-shot learning
Language projections
Universal representations and interlinguas
Low resource structured prediction
Data synthesis