IDM 2017 - Workshop on Interpretable Data Mining (IDM) – Bridging the Gap between Shallow and Deep Models
Topics / Call for Papers
Intelligent systems built upon complex machine learning and data mining models (e.g., deep neural networks) have shown superior performance on a variety of real-world applications. However, their usefulness is limited by the difficulty of interpreting how their predictions are produced. In contrast, the results of many simple or shallow models, such as rule-based or tree-based methods, are fairly explainable but often not sufficiently accurate. The challenge is how to take better advantage of emerging big data while preserving interpretability. This workshop is about interpreting the prediction mechanisms or results of complex data mining models by leveraging simple models that are easier to understand.
Model interpretability enables systems to be clearly understood, properly trusted, effectively managed, and widely adopted by end users. Interpretations are essential in applications such as medical diagnosis, fraud detection, and object recognition, where valid reasons are highly desirable, if not required, before acting on a prediction. For example, we may build a system that predicts an individual's health condition from his or her historical health records. By interpreting the structure of the model, we could identify the diagnosis codes, if any, that contribute to the predicted disease. Such insights into the relationship between features and predictions may also provide valuable guidance for building more effective models.
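As a concrete illustration of the health-records example above, the following sketch ranks diagnosis-code features by the coefficients of a linear classifier, whose per-feature weights are directly readable. Everything here is an illustrative assumption (the toy data, the example ICD-10 codes, and the choice of scikit-learn's LogisticRegression); it is not a method prescribed by the workshop.

# Minimal sketch: reading contributing diagnosis codes off a linear
# disease-risk model. Data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical patient matrix: rows = patients, columns = binary
# indicators for diagnosis codes in the historical record.
codes = ["E11.9", "I10", "J45.909", "Z00.00"]  # example ICD-10 codes
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
y = np.array([1, 1, 0, 0])  # 1 = disease of interest observed later

model = LogisticRegression().fit(X, y)

# For a linear model, each coefficient is a per-feature contribution
# to the log-odds, so ranking coefficients identifies the diagnosis
# codes that push the prediction toward the disease.
for code, w in sorted(zip(codes, model.coef_[0]), key=lambda cw: -cw[1]):
    print(f"{code}: weight {w:+.3f}")

A linear model is used here precisely because its weights are self-explanatory; the workshop's broader question is how to obtain comparable explanations from deeper models.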
This half-day workshop focuses on interpretable models for data mining. Interpretability may be achieved from multiple angles, such as: 1) explaining the factors that lead to the prediction outcomes; 2) extracting prediction rules from complex models; 3) reorganizing an interpretation, if one is already available, in terms of domain-specific knowledge that users can more easily understand; 4) designing effective user interfaces that facilitate interaction between humans and machines. We wish to exchange ideas on recent approaches to the challenges of model interpretability, identify emerging application fields for such techniques, and provide opportunities for related interdisciplinary research and projects.
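To make aspect 2 concrete, below is a minimal sketch of one standard technique for rule extraction: fitting a shallow "global surrogate" decision tree to a black-box model's predictions, so that the tree's branches can be read as extracted rules. The dataset and model choices (scikit-learn's breast-cancer data, a random forest as the black box) are illustrative assumptions, not requirements of the workshop.

# Minimal sketch: extracting human-readable prediction rules from a
# complex model via a global surrogate decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1. Train the complex, hard-to-interpret model.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Fit a shallow tree to the black box's own predictions, so the
#    tree approximates the black box rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Read off the extracted prediction rules as nested if/else tests.
print(export_text(surrogate, feature_names=list(data.feature_names)))

The surrogate is trained on the black box's outputs rather than the true labels, so the resulting rules describe the model's behavior, which is what interpretation requires.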