
NIPS 2010 Workshop on Transfer Learning Via Rich Generative Models

Date: 2010-12-11

Deadline: 2010-10-20

Venue: Whistler, Canada


Topics/Call for Papers

NIPS 2010 Workshop on Transfer Learning Via Rich Generative Models

Whistler, BC, Canada, December 11, 2010

http://www.mit.edu/~rsalakhu/workshop_nips2010/ind...

Important Dates:

Deadline for submissions: October 20, 2010
Notification of acceptance: October 27, 2010

Overview:

Intelligent systems must be capable of transferring previously learned abstract knowledge to new concepts, given only a small or noisy set of examples. This transfer of higher-order information to new learning tasks lies at the core of many problems in computer vision, cognitive science, machine learning, speech perception, and natural language processing.

Over the last decade, there has been considerable progress in developing cross-task transfer (e.g., multi-task learning and semi-supervised learning) using both discriminative and generative approaches. However, many learning systems today cannot cope with new tasks for which they have not been specifically trained, and even when applied to related tasks, trained systems often display unstable behavior.

More recently, researchers have begun developing new approaches to building rich generative models that can extract useful, high-level structured representations from high-dimensional input. The learned representations have been shown to give promising results on a multitude of novel learning tasks, even though those tasks may be unknown at the time the generative model is trained.
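To make this concrete, here is a minimal illustrative sketch (an assumption for exposition, not a system from the workshop): a small restricted Boltzmann machine, among the simplest such generative models, is trained on unlabeled binary data with one-step contrastive divergence (CD-1), and its hidden activations then serve as a learned representation reusable for tasks unknown at training time. All data and hyperparameters below are toy choices.

# Illustrative sketch: binary RBM trained with one-step contrastive
# divergence (CD-1) on unlabeled data; the learned hidden activations
# can later be reused as features for new tasks.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data.
        h0 = self.hidden_probs(v0)
        # Negative phase: one Gibbs step (sample hiddens, reconstruct).
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        # CD-1 gradient approximation, averaged over the batch.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Unsupervised training on unlabeled binary data (toy stand-in).
X_unlabeled = (rng.random((500, 64)) < 0.3).astype(float)
rbm = RBM(n_visible=64, n_hidden=32)
for _ in range(10):
    rbm.cd1_step(X_unlabeled)

# The hidden activations are the learned representation, reusable as
# input features for later (possibly unrelated) supervised tasks.
features = rbm.hidden_probs(X_unlabeled)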

Despite this recent progress, existing computational models are still far from being able to represent, identify, and learn the wide variety of patterns and structure present in real-world data. The goal of this workshop is to catalyze the growing community of researchers working on learning rich generative models, assess the current state of the field, discuss key challenges, and identify promising directions for future investigation.

(More detailed background information is available at the workshop website, http://www.mit.edu/~rsalakhu/workshop_nips2010/ind...)

Submission Instructions:

We invite the submission of extended abstracts to the workshop. Extended abstracts should be 2-4 pages and adhere to the NIPS style (http://nips.cc/PaperInformation/StyleFiles). Submissions should include the title, the authors' names, institutions, and email addresses, and should be sent in PDF or PS format by email to gentrans-nips2010-AT-cs.toronto.edu.

Submissions will be reviewed by the organizing committee on the basis of relevance, significance, technical quality, and clarity. Selected submissions may be accepted either as an oral presentation or as a poster presentation; the number of oral presentations will be limited.

We encourage submissions with a particular emphasis on:

1. Learning structured representations: How can machines extract invariant representations from a large supply of high-dimensional, highly structured unlabeled data? How can these representations be used to learn many different concepts (e.g., visual object categories) and expand on them without disrupting previously learned concepts? How can these representations be used across multiple applications?

2. Transfer learning: How can previously learned representations help in learning new tasks so that less labeled supervision is needed? How can this facilitate knowledge representation for transfer learning tasks?

3. One-shot learning: Can we develop rich generative models capable of efficiently leveraging background knowledge in order to learn novel categories from only one or a few training examples (a minimal sketch of this setting appears after the list)? Are there models suitable for deep transfer, i.e., generalizing across domains, when presented with few examples?

4. Deep learning: Recently, there has been notable progress in learning deep probabilistic generative models that contain many layers of hidden variables, including Deep Belief Networks, Deep Boltzmann Machines, and deep nonparametric Bayesian models. Can these models be extended to transfer learning tasks, as well as to learning new concepts from only one or a few examples? Can we use the representations learned by deep models as input to more structured hierarchical Bayesian models?

5. Scalability and success in real-world applications: How well do existing transfer learning models scale to large problems in computer vision, natural language processing, and speech perception? How well do these algorithms perform when applied to modeling high-dimensional real-world distributions (e.g., the distribution of natural images)?
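As a toy illustration of the one-shot setting in topic 3 (an assumed example, not any submission's method): given a learned feature space, such as the RBM hidden activations in the earlier sketch, a novel category can be recognized from a single exemplar by nearest-neighbor matching in that space. All names and data below are hypothetical.

# Illustrative sketch: one-shot classification by nearest-neighbor
# matching in a learned feature space (one labeled exemplar per class).
import numpy as np

def one_shot_classify(query_feat, exemplar_feats, exemplar_labels):
    # Assign the query the label of the nearest exemplar (Euclidean).
    dists = np.linalg.norm(exemplar_feats - query_feat, axis=1)
    return exemplar_labels[int(np.argmin(dists))]

# One embedded exemplar per novel class (hypothetical 32-d features).
rng = np.random.default_rng(1)
exemplars = rng.random((5, 32))            # 5 novel classes, one shot each
labels = np.array(["cat", "dog", "car", "cup", "tree"])
query = exemplars[2] + 0.01 * rng.standard_normal(32)  # noisy view of "car"
print(one_shot_classify(query, exemplars, labels))     # -> "car"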
