
WHI 2016: Workshop on Human Interpretability in Machine Learning

Date: 2016-06-23

Deadline: 2016-05-01

Venue: New York, NY, USA

Website: https://sites.google.com/site/2016whi

Topics/Call for Papers

Doctors, judges, business executives, and many other decision makers are faced with making critical choices that can have profound consequences. Such decisions are increasingly being supported by predictive models learned by algorithms from historical data. The latest trend in machine learning is to use very sophisticated systems involving deep neural networks, nonlinear kernel methods, and large ensembles of diverse classifiers. While such approaches produce impressive, state-of-the-art prediction accuracies, they often give little comfort to decision makers, who must trust their output blindly because very little insight is available into their inner workings or into how a given decision was reached. Therefore, for predictions to be adopted, trusted, and safely used by decision makers in mission-critical applications, it is imperative to develop machine learning methods that produce interpretable models with excellent predictive accuracy. It is in this way that machine learning methods can have an impact on consequential real-world applications.
This workshop will bring together researchers who study the interpretability of predictive models, develop interpretable machine learning algorithms, and develop methodology to interpret black-box machine learning models (e.g., post-hoc interpretations). This is a very exciting time to study interpretable machine learning, as the advances in large-scale optimization and Bayesian inference that have enabled the rise of black-box machine learning are now also starting to be exploited to develop principled approaches to large-scale interpretable machine learning. Participants in the workshop will exchange ideas on these and allied topics, including:
* Quantifying and axiomatizing interpretability, including its relationship to model complexity,
* Psychology of human concept learning,
* Rule learning, symbolic regression and case-based reasoning,
* Generalized additive models, sparsity and interpretability,
* Interpretable unsupervised models (clustering, topic models, etc.),
* Interpretation of black-box models (including deep neural networks),
* Causality of predictive models,
* Verifying, diagnosing and debugging machine learning systems,
* Visual analytics, and
* Interpretability in reinforcement learning.
---
Invited speakers:
---
* Susan Athey, Stanford University
* Rich Caruana, Microsoft Research
* Jacob Feldman, Rutgers University
* Percy Liang, Stanford University
* Hanna Wallach, University of Massachusetts and Microsoft Research
---
Important dates:
---
* Submission deadline: May 1, 2016
* Notification: May 10, 2016
* Workshop: June 23 or 24, 2016
---
Submission instructions:
---
We invite submissions of full papers (maximum 4 pages excluding references) as well as works-in-progress, position papers, and papers describing open problems and challenges. Papers must be formatted using the ICML template and submitted online via:
https://cmt3.research.microsoft.com/WHI2016.
Accepted papers will be selected for either a short oral presentation or a poster presentation and published in proceedings overlaid on arXiv. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues.
---
Organizing Committee:
---
Chairs:
* Been Kim, Allen Institute for Artificial Intelligence
* Dmitry Malioutov, IBM Research
* Kush Varshney, IBM Research
Committee Members:
* Bart Baesens, Katholieke Universiteit Leuven
* Murray Campbell, IBM Research
* Brian Dalessandro, Facebook
* Finale Doshi-Velez, Harvard University
* Alex Freitas, University of Kent
* Johannes Fürnkranz, Technische Universität Darmstadt
* Maya Gupta, Google Research
* Tin Kam Ho, IBM Watson
* Nitin Indurkhya, University of New South Wales
* Himabindu Lakkaraju, Stanford University
* Cynthia Rudin, Massachusetts Institute of Technology
* Yisong Yue, California Institute of Technology
