
WHI 2017 - Second Annual ICML Workshop on Human Interpretability in Machine Learning

Date: 2017-08-10

Deadline: 2017-06-16

Venue: Sydney, Australia

Website: https://sites.google.com/view/whi2017

Topics / Call for Papers

Doctors, judges, business executives, and many other decision makers are faced with making critical choices that can have profound consequences. Such decisions are increasingly being supported by predictive models learned by algorithms from historical data. The latest trend in machine learning is to use very sophisticated systems involving deep neural networks, nonlinear kernel methods, and large ensembles of diverse classifiers. While such approaches produce impressive, state-of-the-art prediction accuracy, they often give little comfort to decision makers, who must trust their output blindly because very little insight is available into their inner workings or into how a particular decision was reached. Therefore, in order for predictions to be adopted, trusted, and safely used by decision makers in mission-critical applications, it is imperative to develop machine learning methods that produce interpretable models with excellent predictive accuracy. It is in this way that machine learning methods can have an impact on consequential real-world applications.
This workshop will bring together researchers who study the interpretability of predictive models, develop interpretable machine learning algorithms, and develop methodology to interpret black-box machine learning models. This is a very exciting time to study interpretable machine learning, as the advances in large-scale optimization and Bayesian inference that have enabled the rise of black-box machine learning are now also starting to be exploited to develop principled approaches to large-scale interpretable machine learning. Participants in the workshop will exchange ideas on these and allied topics, including:
* Quantifying and axiomatizing interpretability including relationship to model complexity,
* Psychology of human concept learning,
* Rule learning, symbolic regression and case-based reasoning,
* Generalized additive models, sparsity and interpretability,
* Interpretable unsupervised models (clustering, topic models, etc.),
* Interpretation of black-box models (including deep neural networks),
* Causality of predictive models,
* Verifying, diagnosing and debugging machine learning systems,
* Visual analytics, and
* Interpretability in reinforcement learning.
---
Important dates:
---
* Submission deadline: June 16, 2017
* Notification: June 30, 2017
* Workshop: August 10, 2017
---
Submission instructions:
---
We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues or is concurrently submitted.
Papers should be 4-6 pages in length (excluding references and acknowledgements), formatted using the ICML template, and submitted online via the link available at https://sites.google.com/view/whi2017. We expect most submissions to be 4 pages but will allow up to 6 pages.
Accepted papers will be selected for either a short oral presentation or a poster presentation.
---
Organizers:
---
* Been Kim, Google Brain
* Dmitry Malioutov, The D. E. Shaw Group
* Kush Varshney, IBM Research
* Adrian Weller, University of Cambridge
