Third PSA Women’s Caucus Prize Symposium

November 21, 2020, Baltimore, MD, USA

The PSA Women’s Caucus is delighted to announce the third Women’s Caucus Prize Symposium, “Conceptual and Methodological Challenges in Algorithmic Fairness,” which will take place at PSA2020 in Baltimore, MD. The symposium will be held directly before or after the PSA Women’s Caucus business meeting/lunch on Saturday, November 21, allowing the PSA Women’s Caucus membership to attend en masse. We hope you’ll be able to join us as we celebrate outstanding philosophy of science done with an eye to inclusivity.

The 2020 PSA Women’s Caucus Prize Symposium, organized by Sina Fazelpour and Daniel Malinsky, was selected from a very competitive pool of applicants for its exceptional quality and relevance to our membership. “Conceptual and Methodological Challenges in Algorithmic Fairness” addresses the broad question of how predictive models trained on imperfect data may amplify disparities, inequalities, and biases. The emerging field of algorithmic fairness aims to orient algorithm design towards respecting ideals of fairness and justice. This multidisciplinary symposium will bridge the technical literature on algorithmic fairness with philosophy of science, both by introducing philosophers of science to some of the main challenges faced by applied researchers and by bringing philosophical analysis to bear on some of the contested ingredients of fair machine learning proposals. Participants come from backgrounds in philosophy, computer science, sociology, law, and the tech industry.

“Conceptual and Methodological Challenges in Algorithmic Fairness”

 

Organizers: 

Sina Fazelpour (Carnegie Mellon University)

Daniel Malinsky (Johns Hopkins University)

 

Description:

Machine learning (ML) models already drive decisions in numerous sensitive domains, including social services, healthcare, financial services, and criminal justice. People and institutions turn to algorithms not only in order to increase the efficiency of the decision-making process, but also in the hope that automation will result in outcomes that are less prone to the biases that affect human decision-makers. However, a burgeoning body of academic research and investigative journalism has cast doubt on the neutrality of algorithmic decisions. In numerous applications, automation appears to perpetuate or even exacerbate unjustifiable harms against vulnerable communities. This observation has ignited a vibrant field of study on fairness in algorithmic decision-making, in which researchers are developing (1) formal fairness metrics to evaluate algorithms; and (2) mitigation techniques aimed at modifying ML algorithms to achieve pre-specified objectives subject to a (set of) fairness constraint(s).
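To give a concrete sense of what a formal fairness metric of type (1) looks like, here is a minimal sketch in Python. The data, group labels, and decision threshold below are hypothetical, invented purely for illustration; they are not drawn from the symposium papers.

    # Minimal illustration of one formal fairness metric: the demographic parity
    # difference, i.e., the gap in positive-decision rates across two groups.
    # All data below are synthetic and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy scores and group membership for 1,000 individuals (group 0 vs. group 1).
    group = rng.integers(0, 2, size=1000)
    scores = rng.uniform(size=1000) + 0.1 * group  # group 1 tends to score higher

    threshold = 0.5
    decisions = scores >= threshold  # a positive decision is the favorable outcome

    # Positive-decision rate within each group.
    rate_g0 = decisions[group == 0].mean()
    rate_g1 = decisions[group == 1].mean()
    dp_difference = abs(rate_g1 - rate_g0)

    print(f"Positive rate, group 0: {rate_g0:.3f}")
    print(f"Positive rate, group 1: {rate_g1:.3f}")
    print(f"Demographic parity difference: {dp_difference:.3f}")

    # A mitigation technique of type (2) would adjust the model or its decision
    # threshold(s) so that dp_difference stays below a pre-specified tolerance.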

This research on algorithmic fairness has raised a number of questions: What are appropriate modeling frameworks for understanding the origin of biases in data used by ML algorithms? What is the appropriate formal language for quantifying claims of discrimination and fairness in contexts of allocation, prediction, and decision-making? It has been shown that, except under special circumstances, many formal fairness constraints are mutually irreconcilable. What ought we to do (or how should our algorithms behave) in the face of mutually unsatisfiable constraints? What theoretical guarantees (related to performance, uncertainty, and transparency) should we expect fair algorithms to satisfy? Following precedent in the United States Civil Rights Act, vulnerable groups that may be harmed by data-driven decisions are called protected groups (sex, race, ethnicity, …). What is the nature and meaning of these categories, and how exactly should we understand them in practice within particular socio-historical settings? What do we hope to achieve in offering normative guidance? How, or to what extent, should algorithm design incorporate insights from political philosophy, critical theory, and/or feminist philosophy? To what extent is it possible to offer an algorithmic solution to algorithmic fairness?
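To make the point about mutually unsatisfiable constraints concrete, here is a minimal numerical sketch: when two groups have different base rates of the predicted outcome, a classifier satisfying equalized odds (equal error rates across groups) generally cannot also satisfy demographic parity (equal positive-decision rates). The base rates and error rates below are assumptions chosen only to illustrate the arithmetic, not results from the symposium papers.

    # Hypothetical base rates of the outcome in two groups (assumed for illustration).
    base_rate_a = 0.3
    base_rate_b = 0.5

    # Suppose a classifier satisfies equalized odds: error rates shared across groups.
    tpr = 0.8  # true positive rate, equal in both groups
    fpr = 0.1  # false positive rate, equal in both groups

    # By the law of total probability, the positive-decision rate in each group is
    # P(D = 1) = TPR * P(Y = 1) + FPR * P(Y = 0).
    pos_rate_a = tpr * base_rate_a + fpr * (1 - base_rate_a)
    pos_rate_b = tpr * base_rate_b + fpr * (1 - base_rate_b)

    print(f"Positive rate, group A: {pos_rate_a:.2f}")  # 0.31
    print(f"Positive rate, group B: {pos_rate_b:.2f}")  # 0.45

    # The positive-decision rates differ, so demographic parity fails. With unequal
    # base rates, both constraints hold together only in the degenerate case
    # TPR == FPR, i.e., when decisions carry no information about the outcome.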

This symposium will promote cross-fertilization between philosophy of science and the technical work in algorithmic fairness. ML researchers have proposed a variety of definitions of “fairness” and cognate concepts, many of which are not simultaneously satisfiable. The modeling frameworks used by fairness researchers have included ideas from decision/game theory, causal modeling, and statistics. The algorithmic fairness literature could surely benefit from the attention of philosophers, and philosophers of science ought to be interested in the combination of moral, political, epistemological, and metaphysical issues that arise in this novel context.


Participants and Titles:

  1. “Risks of Compounding Injustices in Automated Recruiting”
    Presenting author: Maria De-Arteaga (Carnegie Mellon University)
    Co-authors: Alexey Romanov (University of Massachusetts Lowell), Hanna Wallach (Microsoft Research), Jennifer Chayes (Microsoft Research), Christian Borgs (Microsoft Research), Alexandra Chouldechova (Carnegie Mellon University), Sahin Geyik (LinkedIn), Krishnaram Kenthapadi (LinkedIn), Anna Rumshisky (University of Massachusetts Lowell), Adam Tauman Kalai (Microsoft Research)
  2. “Racial Categories in Algorithmic Fairness: methodological issues and recommendations”
    Presenting author: Emily Denton (Google)
    Co-authors: Alex Hanna (Google), Andrew Smart (Google), Jamila Smith-Loud (Google)
  3. “A Non-ideal Perspective on Algorithmic Fairness”
    Presenting author: Sina Fazelpour (Carnegie Mellon University)
    Co-author: Zachary C. Lipton (Carnegie Mellon University)
  4. “What’s Sex Got to Do with Machine Learning?”
    Presenting author: Lily Hu (Harvard University)
    Co-author: Issa Kohler-Hausmann (Yale University)
  5. “Fairness in Data-driven Decision-making: a causal modeling perspective”
    Presenting author: Daniel Malinsky (Johns Hopkins University)
    Co-authors: Razieh Nabi (Johns Hopkins University), Ilya Shpitser (Johns Hopkins University)

Posted: March 17, 2020