ICML 2013 Workshop on Inferning: Interactions between Inference and Learning
Website: http://inferning.cs.umass.edu
Contact: inferning2013-AT-gmail.com
Important Dates:
Submission Deadline: March 30, 2013 (11:59pm PST)
Author Notification: April 21, 2013
Workshop: June 20-21, 2013, Atlanta, GA
There are strong interactions between learning algorithms, which estimate the parameters of a model from data, and inference algorithms, which use a model to make predictions about data. Understanding the intricacies of these interactions is crucial for advancing the state of the art on real-world tasks in natural language processing, computer vision, computational biology, etc. Yet many facets of these interactions remain poorly understood. In this workshop, we study the interactions between inference and learning from two complementary perspectives.
Perspective one: how does inference affect learning? The first perspective studies how the choice of inference technique during learning influences the resulting model. When exact inference in a model is intractable, efficient approximate inference techniques may be used instead, such as MCMC sampling, stochastic approximation, belief propagation, beam search, and dual decomposition. The workshop will focus on work that evaluates the impact of these approximations on the resulting parameters, in terms of the model's generalization, the effect on the objective function, and the convergence properties of learning. We will also study approaches that attempt to correct for approximate inference by modifying the objective and/or the learning algorithm (for example, contrastive divergence for deep architectures), as well as approaches that minimize dependence on the inference algorithm by exploring inference-free methods (e.g., piece-wise training, pseudo-max, and decomposed learning).
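To ground the inference-free end of this spectrum, the sketch below trains a pairwise binary MRF by maximum pseudo-likelihood, replacing the intractable likelihood with a product of node-wise conditionals. It is a minimal illustration, not code from any system named above: the model, the random stand-in data, and every name in it are assumptions made up for the example.

    # Pseudo-likelihood training for a pairwise binary MRF over x in {0,1}^n.
    # The joint p(x) ~ exp(x'b + 0.5 x'Wx) has an intractable partition function,
    # but each conditional is logistic: p(x_i = 1 | x_-i) = sigmoid(b_i + sum_j W_ij x_j).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def pl_gradient(X, W, b):
        # X: (num_samples, n) array of observed {0,1} configurations.
        P = sigmoid(X @ W + b)             # conditional p(x_i = 1 | x_-i) per sample and node
        R = X - P                          # residual of every node-wise conditional
        gb = R.mean(axis=0)
        gW = (R.T @ X + X.T @ R) / len(X)  # each tied edge weight collects both conditionals' terms
        np.fill_diagonal(gW, 0.0)          # no self-edges
        return gW, gb

    rng = np.random.default_rng(0)
    n = 8
    X = (rng.random((500, n)) < 0.5).astype(float)  # stand-in for real training data
    W, b = np.zeros((n, n)), np.zeros(n)
    for _ in range(200):                   # plain gradient ascent on the log-pseudo-likelihood
        gW, gb = pl_gradient(X, W, b)
        W += 0.1 * gW
        b += 0.1 * gb

Because each conditional involves a single variable given the rest, the gradient never touches the partition function; what happens when the resulting W and b are paired with a different inference procedure at test time is precisely the kind of question this perspective raises.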
Perspective two: how does learning affect inference? Traditionally, the goal of learning has been to find a model for which prediction (i.e., inference) is as accurate as possible. However, an increasing emphasis on modeling complex data has broadened this goal: find models for which prediction is also as efficient as possible. Thus, there has been recent interest in unconventional approaches to learning that combine generalization accuracy with other desiderata such as fast inference. Examples include: learning classifiers for greedy inference (e.g., SEARN, DAgger); structured cascade models that learn cost functions to run inference repeatedly, from coarse to fine levels of abstraction, trading off accuracy and efficiency at each level; learning cost functions for search in the space of complete outputs (e.g., SampleRank, search in the Limited Discrepancy Search space); and learning structures that admit efficient exact inference. Similarly, there has been work on learning operators for efficient search-based inference, and on approaches that trade off speed and accuracy by incorporating resource constraints such as run time and memory into the learning objective.
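As a toy illustration of the speed/accuracy knob, the sketch below runs beam search over output sequences under a learned local scoring function; width 1 is greedy inference, and larger widths buy accuracy with computation. The scorer, vocabulary, and all names here are hypothetical stand-ins, not any particular system mentioned above.

    # Beam-search inference driven by a (learned) local scorer.
    # beam_width = 1 reduces to greedy decoding; larger widths explore more
    # of the output space at proportionally higher cost.
    from typing import Callable, Sequence, Tuple

    def beam_search(score: Callable[[Tuple[str, ...], str], float],
                    vocab: Sequence[str],
                    length: int,
                    beam_width: int) -> Tuple[Tuple[str, ...], float]:
        beams = [((), 0.0)]                     # (prefix, cumulative score)
        for _ in range(length):
            candidates = [(prefix + (tok,), s + score(prefix, tok))
                          for prefix, s in beams
                          for tok in vocab]
            candidates.sort(key=lambda c: c[1], reverse=True)
            beams = candidates[:beam_width]     # pruning is where the speed is bought
        return beams[0]                         # highest-scoring complete sequence

    # Hypothetical stand-in for a learned scorer: rewards alternating tokens.
    def toy_score(prefix, tok):
        return 1.0 if not prefix or prefix[-1] != tok else -1.0

    print(beam_search(toy_score, vocab=("a", "b"), length=4, beam_width=2))

Under this view, learning targets the scorer so that good outputs survive pruning even at small widths; that is, the training objective is shaped by the inference procedure the model will be deployed with.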
This workshop brings together practitioners from different fields (information extraction, machine vision, natural language processing, computational biology, etc.) in order to study a unified framework for understanding and formalizing the interactions between learning and inference. The following is a partial list of relevant keywords for the workshop:
* learning with approximate inference
* cost-aware learning
* learning sparse structures
* pseudo-likelihood and composite-likelihood training
* contrastive divergence
* piece-wise and decomposed training
* decomposed learning
* coarse-to-fine learning and inference
* score matching
* stochastic approximation
* incremental gradient methods
* adaptive proposal distributions
* learning for anytime inference
* learning approaches that trade off speed and accuracy
* learning to speed up inference
* learning structures that exhibit efficient exact inference
* lifted inference for first-order models
* more ...
New benchmark problems: This line of research can benefit greatly from new challenge problems from various fields (e.g., computer vision, natural language processing, speech, computational biology, computational sustainability). We therefore especially encourage papers describing such problems, their main challenges, evaluation methodology, and public data sets.
Invited Speakers:
Dan Roth, University of Illinois, Urbana-Champaign
Rina Dechter, University of California, Irvine
Ben Taskar, University of Washington
Hal Daumé III, University of Maryland, College Park
Alan Fern, Oregon State University
Important Dates:
Submission Deadline: March 30, 2013 (11:59pm PST)
Author Notification: April 21, 2013
Workshop: June 20-21, 2013, Atlanta, GA
Author Guidelines:
Submissions are encouraged as extended abstracts of ongoing research; the recommended length is 4-6 pages. Additional supplementary content may be included, but it may not be considered during the review process. Papers that have been previously published or are currently under submission elsewhere are also encouraged (we will confirm with the authors before posting their papers online).
Submissions should follow the ICML 2013 style, available here: http://icml.cc/2013/wp-content/uploads/2012/12/icm... However, since the review process is not double-blind, submissions need not be anonymized and author names may be included.
Submission site: https://www.easychair.org/conferences/?conf=infern...
Organizers:
Janardhan Rao (Jana) Doppa, Oregon State University
Pawan Kumar, École Centrale Paris
Michael Wick, University of Massachusetts, Amherst
Sameer Singh, University of Massachusetts, Amherst
Ruslan Salakhutdinov, University of Toronto