Inferning 2012 - ICML Workshop on Inferning: Interactions between Inference and Learning
Topics/Call for Papers
This workshop will study the interactions between algorithms that learn a model and the inference algorithms that use the resulting model parameters. These interactions will be studied from two perspectives.
The first perspective studies how the choice of inference technique during learning influences the resulting model. When faced with models for which exact inference is intractable, there are multiple approximate inference techniques that can be used, such as MCMC sampling, belief propagation, beam search, dual decomposition, etc. The workshop will focus on work that evaluates the impact of these approximations on the resulting parameters, in terms of the generalization of the model, the effect on the objective function, and the convergence properties of learning. We will also study approaches that attempt to correct for the approximations in inference by modifying the objective and/or the learning algorithm (for example, contrastive divergence for deep architectures), and approaches that minimize the dependence on inference algorithms by exploring inference-free methods (for example, piecewise training).
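As a concrete illustration of learning with approximate inference, the sketch below shows one step of contrastive divergence (CD-1) for a binary restricted Boltzmann machine: a single Gibbs step stands in for the intractable model expectation in the log-likelihood gradient. This is a minimal sketch, not code from the workshop; the layer sizes, learning rate, and data are illustrative assumptions.

# Minimal CD-1 sketch for a binary RBM (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_vis, b_hid, v0, lr=0.01):
    """One CD-1 update on a batch of visible vectors v0 (batch x n_vis)."""
    # Positive phase: exact expectation of hiddens given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: a single Gibbs step approximates the model expectation.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_hid)
    # Approximate log-likelihood gradient: data term minus (one-step) model term.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / batch
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Toy usage: 6 visible units, 4 hidden units, random binary data.
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((32, n_vis)) < 0.5).astype(float)
for _ in range(100):
    W, b_vis, b_hid = cd1_step(W, b_vis, b_hid, data)

Because the chain is truncated after a single step, the update follows a biased estimate of the log-likelihood gradient; how such biases affect the learned parameters is precisely the kind of question this perspective raises.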
The second perspective considers learning objectives that result in efficient, accurate inference at test time. These unconventional approaches to learning combine generalization to unseen data with other desiderata, such as faster inference. For example, work on structured cascades learns models for which greedy, efficient inference can be performed at test time while still maintaining accuracy guarantees. Similarly, there has been work that learns operators for efficient search-based inference, and work that incorporates resource constraints on running time and memory into the learning objective.
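To make the cascade idea concrete, the sketch below shows a simplified two-stage classifier in which a cheap model prunes the label set before an expensive model scores the survivors. It is a toy sketch under assumed models and a fixed pruning budget; actual structured cascades prune the states of a sequence model using max-marginal scores, with thresholds learned to trade pruning error against efficiency.

# Toy two-stage cascade (all models and sizes here are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n_labels, dim = 50, 10

W_cheap = rng.standard_normal((n_labels, dim))  # fast, coarse scorer
W_exact = rng.standard_normal((n_labels, dim))  # slow, accurate scorer (stand-in)

def cascade_predict(x, keep=5):
    # Stage 1: score all labels with the cheap model, keep only the top few.
    coarse = W_cheap @ x
    survivors = np.argsort(coarse)[-keep:]
    # Stage 2: the expensive model only touches the surviving labels.
    fine = W_exact[survivors] @ x
    return survivors[np.argmax(fine)]

x = rng.standard_normal(dim)
print(cascade_predict(x))

The design choice is to spend the expensive model's computation only where the cheap model leaves ambiguity, accepting a small risk of pruning the correct label in exchange for a large reduction in test-time cost.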
The workshop aims to bring together practitioners of these approaches in an effort to develop a unified framework under which these interactions can be studied, understood, and formalized. The following is a partial list of relevant keywords for the workshop:
learning with approximate inference
cost-aware learning
learning sparse structures
pseudo-likelihood training
contrastive divergence
piecewise training
coarse-to-fine learning and inference
score matching
stochastic approximation
incremental gradient methods
and more ...