4th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017)
Co-located with the 23rd SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2017)
14 August 2017, Halifax, Nova Scotia, Canada
Submission Deadline: 26 May 2017, 23:59 Anywhere on Earth (AoE)
fatml.org
===
OVERVIEW
---
This workshop aims to bring together a growing community of researchers, practitioners, and policymakers concerned with fairness, accountability, and transparency in machine learning. The past few years have seen growing recognition that machine learning raises novel ethical, policy, and legal challenges. In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning and data-driven systems, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions. At the same time, there is increasing alarm that the complexity of machine learning and the opacity of data mining processes may reduce the justification for consequential decisions to "the algorithm made me do it" or "this is what the model says." The goal of this workshop is to provide researchers with a venue to explore how to characterize and address these issues with computationally rigorous methods. We seek contributions that attempt to measure and mitigate bias in machine learning, to audit and evaluate machine learning models, and to render such models more interpretable and their decisions more explainable.
This year, the workshop is co-located with the 23rd SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2017). The workshop will consist of a mix of invited talks, invited panels, and contributed talks. We welcome paper submissions that address any issue of fairness, accountability, or transparency in machine learning. We place a special emphasis on papers describing how to bring tools for ensuring fairness, accountability, or transparency into real-world applications of machine learning, and we especially welcome submissions from practitioners in industry, government, and civil society.
TOPICS OF INTEREST
---
Fairness:
• Can we develop new computational techniques for discrimination-aware data mining? How should we handle, for example, bias in training data sets?
• How should we formalize fairness? What does it mean for an algorithm to be fair?
• Should we look only to the law for definitions of fairness? Are legal definitions sufficient? Can legal definitions even be translated to practical algorithmic contexts?
• Can we develop definitions of discrimination and disparate impact that move beyond distributional constraints such as the Equal Employment Opportunity Commission’s 80% rule, or constraints such as demographic parity? (Both criteria are illustrated in the sketch after this list.)
• Who decides what counts as fair when fairness becomes a machine learning objective?
• Are there any dangers in turning questions of fairness into computational problems?
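As a concrete point of reference for the question above on distributional constraints, here is a minimal sketch of the two criteria it names: the EEOC's 80% (four-fifths) rule as a ratio of per-group selection rates, and demographic parity as the difference between them. The decisions and group labels below are hypothetical, chosen only to illustrate the definitions.
```python
# Illustrative only: two distributional fairness criteria computed from
# hypothetical binary decisions. Group labels are assumptions for the example.
import numpy as np

def selection_rates(decisions, groups):
    """Fraction of positive decisions within each group."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

def disparate_impact_ratio(decisions, groups, protected, reference):
    """EEOC-style ratio of the protected group's selection rate to the
    reference group's. The '80% rule' flags a ratio below 0.8."""
    rates = selection_rates(decisions, groups)
    return rates[protected] / rates[reference]

def demographic_parity_gap(decisions, groups, a, b):
    """Absolute difference in selection rates; 0 means exact parity."""
    rates = selection_rates(decisions, groups)
    return abs(rates[a] - rates[b])

# Hypothetical decisions for two groups of five people each.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups    = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

ratio = disparate_impact_ratio(decisions, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f} (80% rule violated: {ratio < 0.8})")
print(f"demographic parity gap: {demographic_parity_gap(decisions, groups, 'a', 'b'):.2f}")
```
Both criteria constrain only the distribution of outcomes across groups, which is exactly the limitation the question above asks whether we can move beyond.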
Accountability:
• What would human review entail if models were available for direct inspection?
• Are there practical methods to test existing algorithms for compliance with a policy?
• Can we prove that an algorithm behaves in some way without having to reveal the algorithm? Can we achieve accountability without transparency?
• How can we conduct reliable empirical black-box testing and/or reverse engineer algorithms to test for ethically salient differential treatment? (A minimal probe of this kind is sketched after this list.)
• Can we demonstrate the causal origins of the outcome predicted by a model?
• What constitutes sufficient evidence, for someone other than a model's creator, that the model functions as intended? Can we describe the goals of modeling effectively?
• What are the societal implications of autonomous experimentation? How can we manage the risks that such experimentation might pose to users?
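To make the black-box testing question above concrete, the sketch below scores each record twice, toggling only a (assumed binary) protected attribute, and reports the fraction of decisions that flip. The model, the feature layout, and the protected-attribute index are all assumptions for illustration, not part of the call.
```python
# A minimal paired-input probe for differential treatment: the model is
# treated purely as an oracle that maps feature vectors to decisions.
import numpy as np

def flip_rate(predict, X, protected_idx):
    """Fraction of inputs whose decision changes when only the
    (binary) protected attribute is toggled."""
    X_flipped = X.copy()
    X_flipped[:, protected_idx] = 1 - X_flipped[:, protected_idx]
    return np.mean(predict(X) != predict(X_flipped))

def black_box(X):
    # Stand-in for an opaque model; here it (improperly) keys on the
    # protected attribute in column 0 along with two other features.
    return (X.sum(axis=1) >= 2).astype(int)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 3)).astype(float)
print(f"decisions flipped by toggling the protected attribute: "
      f"{flip_rate(black_box, X, 0):.1%}")
```
A probe of this kind never inspects the model's internals, so it also bears on the question above of achieving accountability without transparency.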
Transparency:
• How can we develop interpretable machine learning methods that provide ways to manage the complexity of a model and/or generate meaningful explanations? (An elementary example of such an explanation is sketched after this list.)
• Can we field interpretable methods in a way that does not reveal private information used in the construction of the model?
• Can we use adversarial conditions to learn about the inner workings of inscrutable algorithms? Can we learn from the ways they fail on edge cases?
• How can we use game theory and machine learning to build fully transparent but robust models using signals that people would face severe costs in trying to manipulate?
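As a minimal illustration of the explanation question above, the sketch below uses the simplest interpretable case: a linear score decomposes additively into per-feature contributions, which can be reported as a ranked, signed list. The feature names and weights are hypothetical; real deployed models rarely decompose this cleanly, which is precisely why the call asks for richer methods.
```python
# One elementary form of a "meaningful explanation": for a linear model,
# each feature's contribution w_i * x_i is exact and additive, so a decision
# can be reported as a ranked list of signed contributions.
import numpy as np

def explain_linear(weights, names, x, bias=0.0):
    """Return the score and per-feature contributions, largest magnitude first."""
    contribs = weights * x
    order = np.argsort(-np.abs(contribs))
    score = contribs.sum() + bias
    return score, [(names[i], contribs[i]) for i in order]

weights = np.array([1.2, -0.8, 0.3])          # hypothetical model weights
names = ["income", "debt", "tenure"]          # hypothetical feature names
x = np.array([0.5, 1.0, 2.0])                 # one hypothetical input

score, explanation = explain_linear(weights, names, x)
print(f"score = {score:+.2f}")
for name, contribution in explanation:
    print(f"  {name}: {contribution:+.2f}")
```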
PAPER SUBMISSION
---
Papers are limited to 4 content pages, including figures and tables, and should use a standard 2-column, 11pt format; an additional fifth page containing only cited references is permitted. Papers must be anonymized for double-blind reviewing. Accepted papers will be made available on the workshop website and should also be posted by the authors to arXiv; the workshop's proceedings are non-archival, so contributors remain free to publish their work in archival journals or conferences. Accepted papers will be presented as either a talk or a poster (to be determined by the workshop organizers). Only papers that have not already been published elsewhere will be considered.
Papers should be submitted via EasyChair: https://easychair.org/conferences/?conf=fatml2017
• Paper Submissions Deadline: 26 May 2017, 23:59 Anywhere on Earth (AoE)
• Notification to Authors: 16 June 2017
• Camera-Ready Papers Deadline: 30 June 2017
ORGANIZERS
---
Solon Barocas, Microsoft Research and Cornell University
Sorelle Friedler, Haverford College
Joshua A. Kroll, Independent
Suresh Venkatasubramanian, University of Utah
Hanna Wallach, Microsoft Research and University of Massachusetts Amherst