FATML 2015 - 2nd Workshop on Fairness, Accountability, and Transparency in Machine Learning
Topics/Call for Papers
2nd Workshop on Fairness, Accountability, and Transparency in Machine Learning
ICML 2015
July 11, Lille, France
http://fatml.org/
Submission Deadline: May 1, 2015
===
OVERVIEW
---
Machine learning is increasingly part of our everyday lives, influencing not only our individual interactions with websites and online platforms but also national policy decisions that shape society at large. When algorithms make automated decisions that can affect our lives so profoundly, how do we make sure that their decisions are fair, verifiable, and accountable? This workshop will explore how to integrate these concerns into machine learning and how to address them with computationally rigorous methods.
The workshop takes place at an important moment. The debate about "big data" on both sides of the Atlantic has begun to expand beyond issues of privacy and data protection. Policymakers, regulators, and advocates have recently expressed fears about the potentially discriminatory impact of analytics, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions. At the same time, there is growing alarm that the complexity of machine learning may reduce the justification for consequential decisions to "the algorithm made me do it". Decision procedures perceived as fundamentally inscrutable have drawn special attention.
The workshop will bring together an interdisciplinary group of researchers to address these challenges head-on.
TOPICS OF INTEREST
---
We welcome contributions on theoretical models, empirical work, and everything in between, including (but not limited to) contributions that address the following open questions:
* How can we achieve high classification accuracy while preventing discriminatory biases?
* What are meaningful formal fairness properties? (One widely studied candidate, demographic parity, is sketched after this list.)
* What is the best way to represent how a classifier or model has generated a particular result?
* Can we certify that some output has an explanatory representation?
* How do we balance the need for knowledge of sensitive attributes in fair modeling and classification against the concerns and limitations surrounding their collection and use?
* What ethical obligations does the machine learning community have when models affect the lives of real people?
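To ground the question of formal fairness properties above: one widely studied candidate is demographic parity (also called statistical parity), which asks that a classifier's rate of positive predictions be equal across groups defined by a sensitive attribute. The Python sketch below, using purely hypothetical predictions and group labels, illustrates how the demographic parity gap of a classifier's outputs could be measured.

    # A minimal sketch of one formal fairness property: demographic parity.
    # A classifier satisfies it when the rate of positive (favorable)
    # predictions is equal across groups defined by a sensitive attribute.
    # All names and data below are hypothetical.

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates between any two groups."""
        counts = {}
        for pred, group in zip(predictions, groups):
            total, positives = counts.get(group, (0, 0))
            counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
        rates = [positives / total for total, positives in counts.values()]
        return max(rates) - min(rates)

    # Hypothetical binary predictions (1 = favorable outcome) and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5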
PAPER SUBMISSION
---
Papers are limited to four content pages, including figures and tables, and must follow the ICML 2015 format; an additional fifth page containing only cited references is permitted. Papers should be anonymized. Accepted papers will be made available on the workshop website; however, the workshop's proceedings are non-archival, so contributors remain free to publish their work in archival journals or conferences. Accepted papers will be presented as either a talk or a poster (to be determined by the workshop organizers).
Papers should be submitted here: https://easychair.org/conferences/?conf=fatml2015
Deadline for submissions: May 1, 2015
Notification of acceptance: May 10, 2015
ORGANIZATION
---
Workshop Organizers:
Solon Barocas, Princeton University
Sorelle Friedler, Haverford College
Moritz Hardt, IBM Almaden Research Center
Joshua Kroll, Princeton University
Carlos Scheidegger, University of Arizona
Suresh Venkatasubramanian, University of Utah
Hanna Wallach, Microsoft Research NYC