RL 2012 - ICML Workshop on Representation Learning

Date: 2012-06-26

Deadline: 2012-05-07

Venue: Edinburgh, United Kingdom

Website: https://icml.cc/2012/workshops/

Topics / Call for Papers

In this workshop we consider the question of how we can learn meaningful and useful representations of data. There has been a great deal of recent work on this topic, much of it emerging from researchers interested in training deep architectures. Deep learning methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines have shown promise as a means of learning invariant representations of data and have already been successfully applied to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics. Bayesian nonparametric methods and other hierarchical graphical model-based approaches have also recently been shown to learn rich representations of data.
By bringing together researchers with diverse expertise and perspectives who share an interest in how to learn data representations, we will explore the challenges and promising directions for future research in this area.
In an opening overview talk and a panel discussion (including our invited speakers), we will address some of the issues that have recently emerged as critical to the future development of this line of research:
How do we learn invariant representations? Feature pooling is a popular and highly successful means of achieving invariant features, but is there a tension between feature specificity and robustness to structured noise (movement along an irrelevant factor of variation)? Does it make sense to think in terms of a theory of invariant features?
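As a concrete illustration of the tension in the question above, here is a minimal sketch of max pooling over a 1-D feature map; the data, pool size, and function name are illustrative assumptions, not anything prescribed by the workshop.

```python
# Minimal sketch of feature pooling (illustrative, not the workshop's method):
# max pooling over non-overlapping windows of a 1-D feature map.
import numpy as np

def max_pool(feature_map, pool_size):
    """Downsample by taking the max over non-overlapping windows."""
    n = len(feature_map) // pool_size
    return feature_map[: n * pool_size].reshape(n, pool_size).max(axis=1)

# A feature response and a copy shifted by one position, i.e. varied along
# an irrelevant factor (translation).
response = np.array([0.9, 0.0, 0.0, 0.0, 0.3, 0.0])
shifted = np.array([0.0, 0.9, 0.0, 0.0, 0.0, 0.3])

# Shifts that stay inside a pooling window are absorbed (invariance), but the
# exact positions are lost (reduced specificity): both print [0.9 0.  0.3].
print(max_pool(response, pool_size=2))
print(max_pool(shifted, pool_size=2))
```

A shift that crosses a window boundary would change the pooled output, which is exactly the specificity/robustness trade-off at issue.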
What role does learning really play? Some evidence suggests that learning may not be as important as previously believed; rather, the feature-extraction process itself may play the most significant role in determining the quality of the resulting representation. For example, there is evidence that the use of feedback during feature extraction can be very important.
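One possible reading of "feedback in feature extraction" is iterative inference, as in sparse coding, where the code is refined repeatedly using the reconstruction error rather than computed in a single feed-forward pass. Below is a minimal ISTA-style sketch; the dictionary, step size, and sparsity weight are illustrative assumptions, and the workshop text does not prescribe this particular algorithm.

```python
# Minimal sketch of feedback-driven feature extraction: ISTA-style sparse
# coding, where the code is refined iteratively from the reconstruction error.
import numpy as np

def ista_codes(x, D, lam=0.1, step=0.1, iters=100):
    """Infer a sparse code z minimizing ||x - D z||^2 / 2 + lam * ||z||_1."""
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        residual = x - D @ z                  # feedback: reconstruction error
        z = z + step * (D.T @ residual)       # gradient step on the code
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 32))                 # overcomplete dictionary (assumed given)
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
x = 2.0 * D[:, 3] + 0.01 * rng.normal(size=16)  # signal dominated by one atom
z = ista_codes(x, D)
print(np.argmax(np.abs(z)))                   # typically recovers the dominant atom (3)
```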
How can several layers of latent variables be effectively learned? There has been a great deal of empirical work showing that certain architectures and inference algorithms are important for learning representations that retain information about the input while extracting progressively more abstract concepts. We would like to discuss which modules are key to these hierarchical models and which inference methods are best suited to discovering useful representations of data. We would also like to investigate which inference algorithms are most effective and scalable in the number of data points and the feature dimensionality.
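As a sketch of what layer-wise learning of latent variables can look like in practice, here is a greedy stack of tied-weight sigmoid autoencoders, each trained on the codes of the layer below. The architecture, learning rate, and random data are illustrative assumptions standing in for the many module and inference choices the workshop aims to discuss.

```python
# Minimal sketch of greedy layer-wise training (illustrative assumptions):
# a stack of tied-weight sigmoid autoencoders trained on squared error.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200):
    """Fit one tied-weight sigmoid autoencoder on X; return (W, b, codes)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(d)          # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = sigmoid(H @ W.T + c)      # decode with tied weights
        dR = (R - X) * R * (1 - R)    # gradient at the decoder pre-activation
        dH = (dR @ W) * H * (1 - H)   # backpropagated to the encoder pre-activation
        W -= lr * (X.T @ dH + dR.T @ H) / n   # both paths touch the tied W
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b, sigmoid(X @ W + b)

# Greedy stacking: each layer is trained on the codes of the layer below,
# yielding progressively more abstract (here, lower-dimensional) codes.
X = rng.random((100, 20))    # stand-in for real input data
codes = X
for n_hidden in (10, 5):
    W, b, codes = train_autoencoder(codes, n_hidden)
print(codes.shape)           # (100, 5): the top-layer representation
```

Each layer here is trained in isolation; how such greedily learned layers should be composed, and whether joint inference across layers does better, is among the open questions above.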
The workshop also invites paper submissions on the development of representation learning methods, deep learning algorithms, theoretical foundations, inference and optimization methods, semi-supervised and transfer learning, and applications of deep learning and unsupervised feature learning to real-world tasks. Papers will be presented mainly as posters.

Last modified: 2012-03-24 09:04:23