SLPCS 2016 - Workshop on Semantic Spaces at the Intersection of NLP, Physics and Cognitive Science

Date: 2016-06-11

Deadline: 2016-03-20

Venue: Glasgow, United Kingdom

Website: https://www.sites.google.com/site/semspworkshop

Topics/Call for Papers

Since their introduction in the early 1970s, vector space models of meaning have evolved into a well-established area of research in Natural Language Processing (NLP). Their probabilistic nature and their ability to exploit the abundance of large-scale resources such as the Web make them one of the most useful tools, arguably the most successful (Turney and Pantel, 2010), for modeling what we broadly call meaning in language. The geometry of these spaces, in particular the angular distance between vectors, has been widely used in NLP as a measure of similarity of meaning.
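
For concreteness, here is a minimal sketch in Python of the angular-distance idea: cosine similarity between two word vectors. The vectors are invented toy co-occurrence counts, not drawn from any real corpus or cited work.

    import numpy as np

    def cosine_similarity(u, v):
        # cos(theta) = (u . v) / (|u| |v|); values near 1 indicate
        # vectors pointing in nearly the same direction
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    # Toy co-occurrence counts over three contexts: (pet, purr, bark)
    dog = np.array([4.0, 0.2, 3.0])
    cat = np.array([5.0, 3.0, 0.1])
    print(cosine_similarity(dog, cat))  # high value: related meanings
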
Another field in which vector space models play an important role is physics, and especially quantum theory. Though seemingly unrelated to language, intriguing connections have recently been uncovered. The categorical model of Coecke et al. (2010), inspired by quantum protocols, has provided a convincing theoretical and practical account of compositionality in vector space models of NLP. The resulting setting systematically extends the vector models from words to sentences, making it possible to reason about sentence meaning with the same tools as word meaning. Frobenius algebras have enabled reasoning about the meanings of functional words such as relative pronouns (Sadrzadeh et al., 2013), and have been used for modeling aspects of language such as intonation (Kartsaklis and Sadrzadeh, 2015). The CPM construction over the underlying category has provided a setting where the traditional notion of a word vector is replaced with that of a density matrix, allowing for a more fine-grained model (Piedeleu et al., 2015; Balkir, 2014). All along the way, the diagrammatic calculus of categorical quantum mechanics has simplified these computations, allowing the flow of meaning within sentences to be depicted using methods similar to those used for quantum protocols such as teleportation.
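
A rough illustration of the compositional step (bypassing the categorical machinery itself): in the style of Coecke et al. (2010), a transitive verb can be represented as an order-3 tensor that is contracted with subject and object vectors to yield a sentence vector. The dimensions and values below are invented.

    import numpy as np

    n, s = 3, 2                        # noun-space and sentence-space dimensions
    subject = np.random.rand(n)        # e.g. a vector for "dogs"
    obj     = np.random.rand(n)        # e.g. a vector for "cats"
    verb    = np.random.rand(n, s, n)  # e.g. a tensor for "chase"

    # sentence[j] = sum over i, k of subject[i] * verb[i, j, k] * obj[k]
    sentence = np.einsum('i,ijk,k->j', subject, verb, obj)
    print(sentence)                    # a vector in the sentence space
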
The link between physics and natural language semantics via vector space models has not been restricted to the tools and aspirations of categorical quantum mechanics. Density matrices have been used for modelling grammar alongside meaning (Blacoe et al., 2013), whereas Sordoni and Nie (2014) exploit similar means for information retrieval. Methods from quantum logic have been applied to model logical words in natural language (Widdows, 2003), to reason about the human mental lexicon in cognitive processes (Bruza et al., 2009), and to vectors of queries and documents in information retrieval (Van Rijsbergen, 2004).
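
The density-matrix representation can be sketched concretely as follows; this mixture-of-contexts construction and trace-based comparison are a generic illustration with toy data, not the specific models of the works cited above.

    import numpy as np

    def density_matrix(context_vectors):
        # rho = (1/m) * sum_i |v_i><v_i|, a uniform mixture of unit vectors
        vs = [v / np.linalg.norm(v) for v in context_vectors]
        return sum(np.outer(v, v) for v in vs) / len(vs)

    rho_a = density_matrix([np.array([1.0, 0.0]), np.array([1.0, 1.0])])
    rho_b = density_matrix([np.array([0.0, 1.0])])
    # Trace inner product Tr(rho_a rho_b): a similarity score in [0, 1]
    print(np.trace(rho_a @ rho_b))
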
There is also a long-standing history of vector space models in cognitive science. Theories of categorization such as those developed by Ashby and Gott (1988); Nosofsky (1986); Rosch and Mervis (1975) utilise notions of angular distance between vectors. Hampton (1987); Smith and Osherson (1984); Tversky (1977) encode meanings as feature vectors, and more recently Gärdenfors (2004) has developed a model of concepts in which conceptual spaces provide geometric structures, and information is represented by points, vectors and regions in vector spaces. The conceptual spaces model has been applied to language evolution (Steels et al., 2005), scientific theory change (Gärdenfors and Zenker, 2013), and models of musical creativity (Forth et al., 2010), amongst others, and has the potential to augment NLP models of meaning with representations that have been learned through interaction with the external world.
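
A minimal sketch of the conceptual-spaces picture, assuming an invented two-dimensional quality space and prototype points (a nearest-prototype rule in the spirit of the categorization theories above, not Gärdenfors's full formalism):

    import numpy as np

    # Toy quality space with dimensions (hue, brightness)
    prototypes = {
        'red':    np.array([0.00, 0.5]),
        'yellow': np.array([0.17, 0.8]),
    }

    def categorize(point):
        # Assign a stimulus to the concept with the nearest prototype
        return min(prototypes, key=lambda c: np.linalg.norm(prototypes[c] - point))

    print(categorize(np.array([0.05, 0.55])))  # -> 'red'
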
Exploiting the common ground provided by the concept of a vector space, the workshop aims to bring together researchers working at the intersection of NLP, cognitive science, and physics, offering them an appropriate forum for presenting their uniquely motivated work and ideas. The interplay between these three disciplines will foster theoretically motivated approaches to understanding how the meanings of words interact with each other in sentences and discourse, how diagrammatic reasoning depicts and simplifies this interaction, how language models are determined by input from the world, and how word and sentence meanings interact logically. Topics of interest include (but are not restricted to):
Reasoning in semantic spaces
Applications of quantum logic in natural language processing
Compositionality in semantic spaces and conceptual spaces
Links between conceptual spaces and natural language processing
Modeling functional words such as prepositions and relative pronouns in compositional distributional models of meaning
Diagrammatic reasoning for natural language processing
