CVSC 2013 - International Workshop on Continuous Vector Space Models and their Compositionality
Topics/Call for Papers
In most natural language processing applications, the goal is to extract linguistic information from raw text or to transform linguistic observations into an alternative form, for instance from speech to text or from one language to another. These applications typically involve statistical models that rely heavily on a discrete representation of linguistic concepts, such as words and their POS tags, phrases and their syntactic categories, or sentences and documents. This includes any model parameterized with large probability tables or based on the extraction of many co-occurrence features (e.g. bigrams or tag/word pairs). Such a representation poorly captures statistical structure that is not explicitly encoded in the parameterization but that may be highly relevant from a morphological, syntactic and semantic standpoint.
This hinders the generalization power of the model and reduces its ability to adapt to other domains. Another consequence is that such statistical models can only capture very restricted textual contexts before suffering from data sparsity. For instance, even the Google n-gram corpus only includes n-grams up to length 5. Data sparsity is a well-known and fundamental issue in statistical NLP, for which the existing remedies based on smoothing techniques can be insufficient.
Aims, Scope and Relevant Literature
In recent years, there has been growing interest in algorithms that learn continuous representations for words, phrases, or documents. For instance, one can view latent semantic analysis (Landauer and Dumais, 1997) and latent Dirichlet allocation (Blei et al., 2003) as mappings of documents or words into a continuous, lower-dimensional topic space.
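As a toy illustration of this idea (not taken from any of the cited works), the following sketch maps a few made-up documents into a two-dimensional topic space by applying a truncated SVD to a tf-idf term-document matrix, which is the essence of latent semantic analysis; it assumes scikit-learn is available:

# Illustrative sketch: latent semantic analysis as a truncated SVD of a
# term-document matrix, mapping documents into a low-dimensional topic space.
# The toy corpus below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "stock markets fell sharply today",
    "investors sold shares as markets dropped",
]

# Document-term matrix with tf-idf weighting.
X = TfidfVectorizer().fit_transform(corpus)

# Keep only the top-k singular vectors: each document becomes a dense
# k-dimensional vector in a continuous topic space.
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsa.fit_transform(X)
print(doc_vectors.shape)  # (4, 2)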
As another example, continuous word vector-space models (Sahlgren, 2006; Reisinger, 2012; Turian et al., 2010; Huang et al., 2012) represent word meanings with vectors that capture semantic and syntactic information. These representations can be used to induce similarity measures by computing distances between the vectors, leading to many useful applications, such as information retrieval (Schuetze, 1992; Manning et al., 2008), search query expansion (Jones et al., 2006), document classification (Sebastiani, 2002) and question answering (Tellex et al., 2003).
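As a minimal sketch of how such a similarity measure is computed, the snippet below evaluates cosine similarity between word vectors; the 4-dimensional vectors are invented for illustration, whereas real models learn them from large corpora:

# Minimal sketch of using distances between word vectors as a similarity
# measure. The vectors below are toy values, not learned representations.
import numpy as np

word_vectors = {
    "king":  np.array([0.8, 0.1, 0.7, 0.2]),
    "queen": np.array([0.7, 0.2, 0.8, 0.2]),
    "apple": np.array([0.1, 0.9, 0.1, 0.6]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(word_vectors["king"], word_vectors["queen"]))  # high
print(cosine_similarity(word_vectors["king"], word_vectors["apple"]))  # low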
On the fundamental task of language modeling, many hard clustering approaches have been proposed, such as Brown clustering (Brown et al., 1992) or exchange clustering (Martin et al., 1998). These algorithms provide a form of desparsification and can be seen as examples of unsupervised pre-training. However, they have not been shown to consistently outperform Kneser-Ney smoothed language models, which have discrete n-gram representations at their core. In contrast, one influential proposal that uses the idea of continuous vector spaces for language modeling is that of neural language models (Bengio et al., 2003; Mikolov, 2012). In these approaches, n-gram probabilities are estimated from a continuous representation of words, in lieu of the standard discrete representation, by a neural network that performs both the projection and the probability estimation. These models achieve state-of-the-art performance on several well-studied language modeling datasets.
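To make the idea concrete, here is a forward-pass-only sketch in the spirit of such neural n-gram language models: context words are mapped to continuous embeddings, concatenated, projected through a hidden layer, and a softmax over the vocabulary yields the next-word distribution. All sizes and weights are toy values, and training by backpropagation is omitted entirely.

# Sketch of a feed-forward neural n-gram language model (forward pass only).
import numpy as np

rng = np.random.default_rng(0)
V, d, h, n = 1000, 50, 100, 3          # vocab size, embedding dim, hidden dim, context length

C = rng.normal(scale=0.1, size=(V, d))         # word embedding matrix (the continuous representation)
W_h = rng.normal(scale=0.1, size=(n * d, h))   # projection -> hidden layer
W_o = rng.normal(scale=0.1, size=(h, V))       # hidden layer -> vocabulary scores

def next_word_probs(context_ids):
    # P(w | context) for a context of n word ids.
    x = C[context_ids].reshape(-1)             # concatenate the n embeddings
    hidden = np.tanh(x @ W_h)
    scores = hidden @ W_o
    exp = np.exp(scores - scores.max())        # numerically stable softmax
    return exp / exp.sum()

probs = next_word_probs([12, 7, 431])          # arbitrary word ids
print(probs.shape, probs.sum())                # (1000,) 1.0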
Other neural-network-based models that use continuous vector representations achieve state-of-the-art performance in speech recognition (Schwenk, 2007; Dahl et al., 2011), multitask learning, NER and POS tagging (Collobert et al., 2011), and sentiment analysis (Socher et al., 2011). Moreover, Le et al. (2012) introduced a continuous space translation model whose use in a large-scale machine translation system yielded promising results in the last WMT evaluation.
Despite the success of single-word vector space models, they are severely limited: they do not capture compositionality, the important property of natural language that allows speakers to determine the meaning of a longer expression from the meanings of its words and the rules used to combine them (Frege, 1892). This prevents such models from gaining a deeper understanding of the semantics of longer phrases or sentences. Recently, there has been much progress in capturing compositionality in vector spaces, e.g., (Pado and Lapata, 2007; Erk and Pado, 2008; Mitchell and Lapata, 2010; Baroni and Zamparelli, 2010; Zanzotto et al., 2010; Yessenalina and Cardie, 2011; Grefenstette and Sadrzadeh, 2011).
The work of Socher et al. (2012) compares several of these approaches on supervised tasks and for phrases of arbitrary type and length.
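As a purely illustrative example of the simplest composition functions studied in this line of work (element-wise addition and element-wise multiplication, as in Mitchell and Lapata, 2010), the sketch below combines two invented word vectors into a phrase vector; it does not reproduce any of the cited papers.

# Sketch of two simple vector composition functions for a two-word phrase.
# The word vectors are toy values chosen only for illustration.
import numpy as np

vec = {
    "red": np.array([0.9, 0.1, 0.3]),
    "car": np.array([0.2, 0.8, 0.5]),
}

def compose_additive(u, v):
    return u + v              # phrase vector as the sum of word vectors

def compose_multiplicative(u, v):
    return u * v              # element-wise product emphasizes shared dimensions

print(compose_additive(vec["red"], vec["car"]))
print(compose_multiplicative(vec["red"], vec["car"]))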
Another trend of research on continuous vector space models belongs to the family of spectral methods. The motivation in that context is that working in a continuous space allows for the design of algorithms that are not plagued by the local minima issues that discrete latent variable models (e.g. an HMM trained with EM) tend to suffer from (Hsu et al., 2008). In fact, this motivation contrasts with the conventional justification for vector space models in the neural network literature, which are usually motivated as a way of tackling data sparsity. This apparent dichotomy is interesting and has not been investigated yet. Finally, spectral methods have recently been developed for word representation learning (Dhillon et al., 2011), dependency parsing (Dhillon et al., 2012) and probabilistic context-free grammars (Cohen et al., 2012).
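To give a flavour of the spectral viewpoint (without reproducing the method of any of the papers cited above), the sketch below derives word representations from a toy word-context co-occurrence matrix with a single truncated SVD: the decomposition is a global, closed-form computation, in contrast to iterative EM-style fitting with its local minima.

# Hedged sketch: low-rank word representations from a toy co-occurrence matrix.
import numpy as np

vocab = ["the", "cat", "dog", "sat", "ran"]
# Toy word-by-context co-occurrence counts (invented numbers).
counts = np.array([
    [0, 4, 3, 2, 2],
    [4, 0, 1, 3, 0],
    [3, 1, 0, 0, 3],
    [2, 3, 0, 0, 0],
    [2, 0, 3, 0, 0],
], dtype=float)

# Truncated SVD: each word gets a k-dimensional spectral embedding.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
embeddings = U[:, :k] * S[:k]           # scale left singular vectors
for w, e in zip(vocab, embeddings):
    print(w, np.round(e, 2))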
In this workshop, we will bring together researchers interested in how to learn continuous vector space models, how to model compositionality, and how to use this kind of representation in NLP applications. The goal is to review recent progress and proposals, discuss open challenges, and identify promising research directions for the NLP community.