
MVRL 2016 - 2016 Workshop on Multi-View Representation Learning

Date: 2016-06-23

Deadline: 2016-05-01

Venue: New York City, NY, USA


Website: https://ttic.uchicago.edu/~wwang5/ICML2016_MVRL

Topics/Call for Papers

ICML 2016 Workshop on Multi-View Representation Learning (MVRL 2016)
June 23rd, 2016
*Workshop webpage:* http://ttic.uchicago.edu/~wwang5/ICML2016_MVRL/
*Submission deadline:* May 1st, 2016
*Submission website:* https://easychair.org/conferences/?conf=mvrl2016
---
1. Call for Papers
We invite researchers to submit their recent work on algorithms, analysis, and applications of multi-view representation learning. A submission should take the form of an extended abstract of *2 pages* in PDF format using the ICML style. Author names do not need to be anonymized, and references may extend as far as needed beyond the 2-page limit. We welcome submissions presenting either results that have not been published previously or a summary of the authors' work that has recently been published or is under review at another conference or journal. In the interest of spurring discussion, we also encourage authors to submit work in progress with preliminary results.
Submissions will be accepted as contributed talks or poster presentations. Extended abstracts should be submitted by May 1st; see the workshop website for submission details. Final versions will be posted on the workshop website (they are archival but do not constitute formal proceedings).
2. Workshop Abstract
Multi-view data are becoming increasingly available in machine learning and its applications. Such data may consist of multi-modal measurements of an underlying signal, such as audio+video, audio+articulation, video+fMRI, image+text, webpage+click-through data, and text in different languages; or may consist of synthetic views of the same measurements, such as different time steps of a time sequence, word+context words, or different parts of a parse tree. The different views often contain complementary information, and multi-view learning methods can take advantage of this information to learn representations/features that are useful for understanding the structure of the data and that are beneficial for downstream tasks.
There has been increasing research activity in this direction, including the exploration of different objectives (e.g., latent variable models, information bottleneck, contrastive losses, correlation-based objectives, multi-view auto-encoders, and deep restricted Boltzmann machines), deep learning models, the learning/inference problems that come with these models, and theoretical understanding of these methods.
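To make one of these objectives concrete, the sketch below implements classical linear CCA, the simplest correlation-based objective: it finds a pair of linear projections under which the two views are maximally correlated. This is an illustrative sketch only; the function name, variable names, and the regularization parameter `reg` are assumptions made here for illustration rather than anything prescribed by the workshop, and deep variants (e.g., deep CCA) replace the linear maps with neural networks trained on the same correlation criterion.

```python
# Minimal sketch of a correlation-based multi-view objective:
# classical (linear) canonical correlation analysis (CCA).
# All names and the regularization constant are illustrative assumptions.
import numpy as np

def linear_cca(view_x, view_y, n_components=2, reg=1e-4):
    """Find projections of two views that maximize their correlation.

    view_x: (n_samples, d_x) array, view_y: (n_samples, d_y) array.
    Returns projection matrices Wx, Wy and the canonical correlations.
    """
    # Center each view.
    X = view_x - view_x.mean(axis=0)
    Y = view_y - view_y.mean(axis=0)
    n = X.shape[0]

    # Regularized covariance and cross-covariance estimates.
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    # Whitening maps from Cholesky factors: Ax @ Sxx @ Ax.T = I.
    Ax = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ay = np.linalg.inv(np.linalg.cholesky(Syy))

    # SVD of the whitened cross-covariance gives the canonical directions.
    U, corr, Vt = np.linalg.svd(Ax @ Sxy @ Ay.T)
    Wx = Ax.T @ U[:, :n_components]
    Wy = Ay.T @ Vt.T[:, :n_components]
    return Wx, Wy, corr[:n_components]
```

Projecting each view with Wx and Wy retains the shared (correlated) structure across views; the other objectives listed above (contrastive losses, multi-view auto-encoders, deep RBMs) pursue the same goal through different formulations.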
The purpose of this workshop is to bring together researchers and practitioners in this area to share their latest results, to express their opinions, and to stimulate future research directions. We expect the workshop to help consolidate the understanding of various approaches proposed by different research groups, to help practitioners find the most appropriate tools for their applications, and to promote better understanding of the challenges in specific applications.
Possible topics include but are not limited to
- New objectives for multi-view representation learning
- Theoretical understanding of/connections between different methods
- Deep learning architectures for multi-view data
- Learning/inference in multi-view models
- Multi-view representation learning with structured inputs/outputs
- Dimensionality reduction/visualization for multi-view data
- Emerging applications of multi-view representation learning
3. Key Dates
Paper submission: 1 May 2016
Acceptance notification: 10 May 2016
Workshop: 23 June 2016
4. Confirmed Speakers
Chris Dyer (Carnegie Mellon University)
Sham Kakade (University of Washington)
Honglak Lee (University of Michigan)
Ruslan Salakhutdinov (Carnegie Mellon University)
5. Workshop Organizers
Xiaodong He (Microsoft Research)
Karen Livescu (TTI-Chicago)
Weiran Wang (TTI-Chicago)
Scott Wen-tau Yih (Microsoft Research)
