ACCEPTED PAPERS (WORKSHOPS)
Nielsen, A., & Woemmel, A. (2024). Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. In: 2nd ICML Workshop on Generative AI and Law. [Paper]
Abstract: Despite heightened awareness of fairness issues within the machine learning (ML) community, there remains a concerning silence regarding discrimination against a rapidly growing and historically vulnerable group: older adults. We present examples of age-based discrimination in generative AI and other pervasive ML applications, document the implicit and explicit marginalization of age as a protected category of interest in ML research, and identify some technical and legal factors that may contribute to the lack of discussion or action regarding this discrimination. Our aim is to deepen understanding of this frequently ignored yet pervasive form of discrimination and to urge ML researchers, legal scholars, and technology companies to proactively address and reduce it in the development, application, and governance of ML technologies. This call is particularly urgent in light of the expected widespread adoption of generative AI in many areas of public and private life.
WORKING PAPERS
Fragile AI Optimism [Working Paper] [AEA Registry] [Poster]
First author, with Hendrik Hüning and Lydia Mechtenberg
Winner of the 2024 Theodore Eisenberg Poster Prize at the Conference on Empirical Legal Studies (CELS), Atlanta.
Best Paper in the area "Technology, Privacy, and Information" at the 2023 American Law & Economics Association (ALEA) Conference, Boston.
Abstract: We study how public attitudes toward AI form and shift using an online deliberation experiment with 2,358 UK citizens in the context of criminal justice. First, we replicate prior survey evidence suggesting public support for adopting AI as a decision-support tool, particularly when certain fairness features are met. We then show that this stated support is fragile: it declines significantly with group deliberation, as supporters are 2.6 times more likely than opponents to change their attitudes. Quantitative text analysis indicates that opponents contribute more arguments in group deliberation, both in frequency and in topic range, and that supporters are more responsive to counterarguments. These results suggest that stated support for AI reflects low attitude strength: it is easily raised through informational framing but quickly reversed through deliberation. More broadly, they caution against inferring the public legitimacy of increased AI deployment from stated support alone.
(Previously circulated as: "Public Attitudes Toward Algorithmic Risk Assessments in Courts: A Deliberation Experiment")
Behavioral Barriers to Algorithmic Fairness [Currently revising, new draft coming soon!] [AEA Registry]
Abstract: Fairness constraints in algorithm design aim to reduce discrimination. Their impact, however, also depends on the adoption of the algorithm by human decision-makers, who typically retain full authority in high-stakes contexts. In a hiring experiment, I find suggestive evidence that protecting group membership in algorithmic predictions leads individuals to be more conservative in updating their beliefs about candidates based on these predictions. I then find a significant increase in discrimination in their hiring of candidates under this algorithm, driven by those who initially believe that group membership predicts performance. Finally, independent of the algorithm's features, about 26% of participants make hiring decisions that cannot be explained by beliefs and are likely based on taste. These results suggest that algorithmic fairness features can paradoxically exacerbate human discrimination rooted in statistical beliefs by hindering adoption and, unsurprisingly, remain orthogonal to taste-based discrimination.
Gender and Socioeconomic Gaps in Digital Skills: Actual and Perceived [SSRN Working Paper] [OSF]
Abstract: This paper documents gender and socioeconomic gaps in digital skills relevant to the labor market, using a representative German household sample. Men and individuals with a higher level of education show greater proficiency across all skill dimensions. Both groups also hold more optimistic beliefs about outperforming others, conditional on actual proficiency. These belief gaps are not driven by overconfidence, but by underconfidence among women and individuals with lower educational backgrounds at the upper end of the skill distribution. Early-life socioeconomic background is not significantly associated with adult digital skills or beliefs.
Digital Skills: Social Disparities and the Impact of Early Mentoring [CESifo Working Paper] [AEA Registry] [SOEP-IS Module]
with Fabian Kosse and Tim Leffler
Abstract: We analyze social disparities in digital skills, their relevance for labor market outcomes, and the long-term impact of an early childhood mentoring program. Drawing on a representative survey and a randomized controlled trial (RCT), we distinguish between proficiency and confidence in digital skills. We document three key findings. First, both skill levels and confidence are strong and similarly sized predictors of labor market earnings. Second, we find marked gender and socioeconomic disparities: males exhibit higher proficiency and confidence than females, while SES gaps exist only for males, and only in the confidence dimension. Third, we provide causal evidence that early mentoring programs can reduce this SES gap. Low-SES males who received mentoring show significantly higher digital skill confidence a decade later. The effect is concentrated among those with initially low confidence and does not increase overconfidence. Mediation analysis suggests that roughly half of the effect operates through improvements in general self-concept and educational attainment.
PHD THESIS
On Human Factors in Machine Fairness: Essays in Behavioral Economics [Thesis]
Written at the Department of Economics, University of Hamburg