SCLeM 2018 - Subword and Character Level Models in NLP (SCLeM)
Topics / Call for Papers
Invited Speakers
Jacob Eisenstein, Georgia Tech
Graham Neubig, CMU
Barbara Plank, University of Groningen
Brian Roark, Google
Topics
tokenization-free models
character-level machine translation
character n-gram information retrieval
transfer learning for character-level models
models of within-token and cross-token structure
natural language generation (e.g. of words not seen in training)
out-of-vocabulary words
morphology & segmentation
relationship between morphology & character-level models
stemming and lemmatization
inflection generation
orthographic productivity
form-meaning representations
true end-to-end learning
spelling correction
efficient and scalable character-level models