LTPM 2014 - Learning Tractable Probabilistic Models Workshop
Topics/Call for Papers
Learning Tractable Probabilistic Models
Workshop at ICML 2014
Beijing, China
June 25th - 26th, 2014
http://ltpm2014.cs.washington.edu
CALL FOR PAPERS
4-8 page submissions due March 26th, 2014
Probabilistic models have had broad impact in machine learning, both in research and industry. Unfortunately, inference in unrestricted probabilistic models is often intractable. Motivated by the importance of efficient inference for big data applications, a substantial amount of work has been devoted to learning probabilistic models for which inference is guaranteed to be tractable. Examples include sum-product networks (SPNs), hinge-loss Markov random fields (HL-MRFs), and tractable higher-order potentials (HOPs). This broad family also includes regularization approaches that ensure that the learnt graphical models have polynomial circuit representations, approaches that exploit structural properties other than treewidth such as perfectness of the graphical representation, and approaches that bring tractable continuous statistical processes such as determinantal point processes (DPPs) to the discrete world.
While research on the aforementioned topics has seen a lot of activity in recent years, an inclusive meeting has been missing. This workshop will provide a unique opportunity to exchange ideas and insights, leading to a coherent picture of the state-of-the-art, a set of important directions and open problems, and closer collaborations among the different research groups working in this area.
Topics of interest include but are not limited to:
* Novel tractable model classes
* Extensions of existing tractable model classes:
- Novel structure and parameter learning algorithms
- Pushing the expressive power of tractable model classes
- More efficient inference algorithms, implementations
* Tractable statistical relational formalisms:
- Tractable Markov logic (TML)
- Probabilistic soft logic (PSL)
- Tractable probabilistic knowledge and databases
- Tractability through lifted inference
* Properties rendering inference and/or learning tractable, with an
  emphasis on alternatives to treewidth:
- Submodularity, concavity, convexity
- Chordality, perfectness of graphical models
- Model symmetries such as (fractional) automorphisms
- Notions of exchangeability
* Tractability of structure and parameter learning versus tractability
of inference:
- Tractable learning via tractable inference
- Notions of tractable learning (PAC, etc.)
- Bridging the gap between tractability notions in AI/ML and statistics
* Approximate inference with polynomial approximation guarantees:
- Greedy algorithms with approximation guarantees
- Local search algorithms with approximation guarantees
- Sampling algorithms with approximation guarantees
* Applications of tractable model classes:
- Information extraction
- Data integration
- Computer vision
- Computational biology
- Natural language processing
- etc.
Invited speakers include (tentative):
* Jeff Bilmes, University of Washington
* Adnan Darwiche, UCLA
* Nando de Freitas, University of Oxford
* Lise Getoor, University of Maryland
* Tony Jebara, Columbia University
* Richard Zemel, University of Toronto
Important dates:
* Submission deadline: March 26th, 2014
* Notification: April 18th, 2014
* Workshop: June 25th or 26th, 2014
Submission instructions:
We invite submissions of full papers (max 8 pages excluding
references) as well as work-in-progress, position, and
challenge-problem papers (max 4 pages excluding references). Papers must be
formatted using the ICML template and submitted online via EasyChair:
https://www.easychair.org/conferences/?conf=ltpm20...
Accepted papers will be presented either as a short oral presentation
or as a poster.
While original contributions are preferred, we also invite submissions
of high-quality work that has recently been published in other venues.
Organizers
* Pedro Domingos (University of Washington)
* Mathias Niepert (University of Washington)
* Daniel Lowd (University of Oregon)
Program Committee (to be extended)
* Animashree Anandkumar (UC Irvine)
* Hung Bui (Nuance Labs)
* Adnan Darwiche (UCLA)
* Jesse Davis (KU Leuven)
* Robert Gens (University of Washington)
* Lise Getoor (University of Maryland)
* Vibhav Gogate (University of Texas, Dallas)
* Bert Huang (University of Maryland)
* Tony Jebara (Columbia University)
* Stefanie Jegelka (UC Berkeley)
* Kristian Kersting (TU Dortmund)
* Alex Kulesza (University of Michigan)
* Sriraam Natarajan (Indiana University)
* David Poole (University of British Columbia)
* Sameer Singh (University of Washington)
* David Sontag (New York University)
* Daniel Tarlow (Microsoft Research Cambridge)
* Guy Van den Broeck (KU Leuven)
* Richard Zemel (University of Toronto)