LFU 2017 - 2017 Workshop on Logical Foundations for Uncertainty and Learning

Date: 2017-08-19

Deadline: 2017-05-15

Venue: Melbourne, Australia

Website: https://homepages.inf.ed.ac.uk/vbelle/wo...

Topics/Call for Papers

The purpose of this workshop is to promote logical foundations for reasoning and learning under uncertainty. Uncertainty is inherent in many AI applications, and coping with it, in terms of preferences, probabilities and weights, is essential for a system to operate purposefully. In the same vein, expecting a domain modeler to completely characterize a system is often unrealistic, so mechanisms by which the system can infer and learn about its environment are needed. While probabilistic reasoning and Bayesian learning have enjoyed many successes and are central to our current understanding of the data revolution, this workshop strives for a deeper investigation of the underlying semantic issues, as well as principled ways of extending these frameworks to richer settings. Broadly speaking, we aim to bring together the many communities focused on uncertainty reasoning and learning -- including knowledge representation, machine learning, logic programming and databases -- by focusing on the logical underpinnings of their approaches and techniques.
Given the intent of the workshop, we encourage two categories of submissions:
On the practical side, we solicit papers that propose ways to bridge conventional learning and inference techniques with deductive and inductive reasoning. Driven by the successes of relational graphical models and statistical relational learning, we especially encourage papers that emphasize or demonstrate non-standard logical features in systems, e.g. the ability to handle infinite domains, existential uncertainty and/or function symbols. (A toy sketch of such a bridge appears after this list.)
On the foundations side, we solicit papers that explicate the use of weights in reasoning and learning, e.g. weight functions such as degrees of belief, preferences, and truth degrees. We especially encourage papers that demonstrate how non-standard weight functions for reasoning and learning can be better integrated with existing probabilistic methods; a sketch of one such generic weight function follows the topic list below. The idea, then, is to foster collaboration between machine learning practitioners and the weighted logic community. For example, we encourage papers that revisit the learning objectives and inference methodologies of existing systems and propose novel semantic frameworks to understand them.
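
To make the practical-side bridge concrete, here is a minimal sketch, in plain Python and not tied to any particular system, of how a small probabilistic logic program can be evaluated by enumerating possible worlds: each probabilistic fact is independently true or false, the deductive rules are closed under forward chaining in each world, and the query probability is the total weight of the worlds that entail the query. The program (two facts, two rules) and all names are hypothetical.

from itertools import product

# Hypothetical toy program (all names invented for illustration):
#   0.6 :: rain.   0.3 :: sprinkler.   wet :- rain.   wet :- sprinkler.
facts = {"rain": 0.6, "sprinkler": 0.3}              # independent probabilistic facts
rules = [("wet", ["rain"]), ("wet", ["sprinkler"])]  # (head, body) clauses

def consequences(world):
    """Close a set of true facts under the rules by forward chaining."""
    derived = set(world)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def query_probability(query):
    """Sum the weights of all possible worlds that entail the query."""
    names = list(facts)
    total = 0.0
    for truth in product([True, False], repeat=len(names)):
        world = {n for n, t in zip(names, truth) if t}
        weight = 1.0
        for n, t in zip(names, truth):
            weight *= facts[n] if t else 1.0 - facts[n]
        if query in consequences(world):
            total += weight
    return total

print(query_probability("wet"))  # 1 - 0.4 * 0.7 = 0.72

Enumeration is of course exponential in the number of facts; the systems this workshop targets replace it with knowledge compilation or lifted inference, but the semantics is the same.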
In sum, topics include (but are not limited to):
Probabilistic and weighted databases and knowledge bases
Integration of deductive and inductive reasoning with Bayesian inference and learning
Semantical foundations for machine learning
Logics for data-intensive information processing, such as data fusion
Extension of statistical relational learning with generic weight functions
Declarative methods for inference and learning
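
To illustrate what a generic weight function might look like, the following sketch (again plain Python, our own illustration rather than any existing system's API) parameterizes model counting by a commutative semiring: with sum and product over [0,1] weights it performs standard weighted model counting and recovers a probability, while with max and min it computes a Gödel-style truth degree for the same formula. Only the choice of "plus" and "times" changes.

from itertools import product

def weighted_model_count(variables, weight, formula, plus, times, zero, one):
    """Aggregate the weights of all truth assignments satisfying `formula`."""
    total = zero
    for truth in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, truth))
        if formula(model):
            w = one
            for v, t in model.items():
                w = times(w, weight(v, t))
            total = plus(total, w)
    return total

# Hypothetical literal weights for the formula (a or b).
w = {("a", True): 0.6, ("a", False): 0.4,
     ("b", True): 0.3, ("b", False): 0.7}
variables = ["a", "b"]
formula = lambda m: m["a"] or m["b"]
weight = lambda v, t: w[(v, t)]

# Probability semiring (sum, product): standard weighted model counting.
print(weighted_model_count(variables, weight, formula,
                           plus=lambda x, y: x + y,
                           times=lambda x, y: x * y, zero=0.0, one=1.0))  # 0.72

# Gödel (max, min) semiring: a fuzzy truth degree for the same formula.
print(weighted_model_count(variables, weight, formula,
                           plus=max, times=min, zero=0.0, one=1.0))  # 0.6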
